Oct 08 15:51:53 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 08 15:51:53 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 08 15:51:53 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 08 15:51:53 localhost kernel: BIOS-provided physical RAM map:
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 08 15:51:53 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 08 15:51:53 localhost kernel: NX (Execute Disable) protection: active
Oct 08 15:51:53 localhost kernel: APIC: Static calls initialized
Oct 08 15:51:53 localhost kernel: SMBIOS 2.8 present.
Oct 08 15:51:53 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 08 15:51:53 localhost kernel: Hypervisor detected: KVM
Oct 08 15:51:53 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 08 15:51:53 localhost kernel: kvm-clock: using sched offset of 2634790188812 cycles
Oct 08 15:51:53 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 08 15:51:53 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 08 15:51:53 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 08 15:51:53 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 08 15:51:53 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 08 15:51:53 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 08 15:51:53 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 08 15:51:53 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 08 15:51:53 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 08 15:51:53 localhost kernel: Using GB pages for direct mapping
Oct 08 15:51:53 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 08 15:51:53 localhost kernel: ACPI: Early table checksum verification disabled
Oct 08 15:51:53 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 08 15:51:53 localhost kernel: ACPI: RSDT 0x00000000BFFE16C4 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 15:51:53 localhost kernel: ACPI: FACP 0x00000000BFFE1578 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 15:51:53 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 15:51:53 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 08 15:51:53 localhost kernel: ACPI: APIC 0x00000000BFFE15EC 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 15:51:53 localhost kernel: ACPI: WAET 0x00000000BFFE169C 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 08 15:51:53 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1578-0xbffe15eb]
Oct 08 15:51:53 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1577]
Oct 08 15:51:53 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 08 15:51:53 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15ec-0xbffe169b]
Oct 08 15:51:53 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe169c-0xbffe16c3]
Oct 08 15:51:53 localhost kernel: No NUMA configuration found
Oct 08 15:51:53 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 08 15:51:53 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Oct 08 15:51:53 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 08 15:51:53 localhost kernel: Zone ranges:
Oct 08 15:51:53 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 08 15:51:53 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 08 15:51:53 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 08 15:51:53 localhost kernel:   Device   empty
Oct 08 15:51:53 localhost kernel: Movable zone start for each node
Oct 08 15:51:53 localhost kernel: Early memory node ranges
Oct 08 15:51:53 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 08 15:51:53 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 08 15:51:53 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 08 15:51:53 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 08 15:51:53 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 08 15:51:53 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 08 15:51:53 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 08 15:51:53 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 08 15:51:53 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 08 15:51:53 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 08 15:51:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 08 15:51:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 08 15:51:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 08 15:51:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 08 15:51:53 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 08 15:51:53 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 08 15:51:53 localhost kernel: TSC deadline timer available
Oct 08 15:51:53 localhost kernel: CPU topo: Max. logical packages:   8
Oct 08 15:51:53 localhost kernel: CPU topo: Max. logical dies:       8
Oct 08 15:51:53 localhost kernel: CPU topo: Max. dies per package:   1
Oct 08 15:51:53 localhost kernel: CPU topo: Max. threads per core:   1
Oct 08 15:51:53 localhost kernel: CPU topo: Num. cores per package:     1
Oct 08 15:51:53 localhost kernel: CPU topo: Num. threads per package:   1
Oct 08 15:51:53 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 08 15:51:53 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 08 15:51:53 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 08 15:51:53 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 08 15:51:53 localhost kernel: Booting paravirtualized kernel on KVM
Oct 08 15:51:53 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 08 15:51:53 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 08 15:51:53 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 08 15:51:53 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 08 15:51:53 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 08 15:51:53 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 08 15:51:53 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 08 15:51:53 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 08 15:51:53 localhost kernel: random: crng init done
Oct 08 15:51:53 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 08 15:51:53 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 08 15:51:53 localhost kernel: Fallback order for Node 0: 0 
Oct 08 15:51:53 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 08 15:51:53 localhost kernel: Policy zone: Normal
Oct 08 15:51:53 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 08 15:51:53 localhost kernel: software IO TLB: area num 8.
Oct 08 15:51:53 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 08 15:51:53 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 08 15:51:53 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 08 15:51:53 localhost kernel: Dynamic Preempt: voluntary
Oct 08 15:51:53 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 08 15:51:53 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 08 15:51:53 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 08 15:51:53 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 08 15:51:53 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 08 15:51:53 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 08 15:51:53 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 08 15:51:53 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 08 15:51:53 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 08 15:51:53 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 08 15:51:53 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 08 15:51:53 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 08 15:51:53 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 08 15:51:53 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 08 15:51:53 localhost kernel: Console: colour VGA+ 80x25
Oct 08 15:51:53 localhost kernel: printk: console [ttyS0] enabled
Oct 08 15:51:53 localhost kernel: ACPI: Core revision 20230331
Oct 08 15:51:53 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 08 15:51:53 localhost kernel: x2apic enabled
Oct 08 15:51:53 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 08 15:51:53 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 08 15:51:53 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 08 15:51:53 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 08 15:51:53 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 08 15:51:53 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 08 15:51:53 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 08 15:51:53 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 08 15:51:53 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 08 15:51:53 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 08 15:51:53 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 08 15:51:53 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 08 15:51:53 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 08 15:51:53 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 08 15:51:53 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 08 15:51:53 localhost kernel: x86/bugs: return thunk changed
Oct 08 15:51:53 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 08 15:51:53 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 08 15:51:53 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 08 15:51:53 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 08 15:51:53 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 08 15:51:53 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 08 15:51:53 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 08 15:51:53 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 08 15:51:53 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 08 15:51:53 localhost kernel: landlock: Up and running.
Oct 08 15:51:53 localhost kernel: Yama: becoming mindful.
Oct 08 15:51:53 localhost kernel: SELinux:  Initializing.
Oct 08 15:51:53 localhost kernel: LSM support for eBPF active
Oct 08 15:51:53 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 08 15:51:53 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 08 15:51:53 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 08 15:51:53 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 08 15:51:53 localhost kernel: ... version:                0
Oct 08 15:51:53 localhost kernel: ... bit width:              48
Oct 08 15:51:53 localhost kernel: ... generic registers:      6
Oct 08 15:51:53 localhost kernel: ... value mask:             0000ffffffffffff
Oct 08 15:51:53 localhost kernel: ... max period:             00007fffffffffff
Oct 08 15:51:53 localhost kernel: ... fixed-purpose events:   0
Oct 08 15:51:53 localhost kernel: ... event mask:             000000000000003f
Oct 08 15:51:53 localhost kernel: signal: max sigframe size: 1776
Oct 08 15:51:53 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 08 15:51:53 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 08 15:51:53 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 08 15:51:53 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 08 15:51:53 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 08 15:51:53 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 08 15:51:53 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 08 15:51:53 localhost kernel: node 0 deferred pages initialised in 14ms
Oct 08 15:51:53 localhost kernel: Memory: 7765392K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616512K reserved, 0K cma-reserved)
Oct 08 15:51:53 localhost kernel: devtmpfs: initialized
Oct 08 15:51:53 localhost kernel: x86/mm: Memory block size: 128MB
Oct 08 15:51:53 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 08 15:51:53 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 08 15:51:53 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 08 15:51:53 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 08 15:51:53 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 08 15:51:53 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 08 15:51:53 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 08 15:51:53 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 08 15:51:53 localhost kernel: audit: type=2000 audit(1759938710.990:1): state=initialized audit_enabled=0 res=1
Oct 08 15:51:53 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 08 15:51:53 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 08 15:51:53 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 08 15:51:53 localhost kernel: cpuidle: using governor menu
Oct 08 15:51:53 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 08 15:51:53 localhost kernel: PCI: Using configuration type 1 for base access
Oct 08 15:51:53 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 08 15:51:53 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 08 15:51:53 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 08 15:51:53 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 08 15:51:53 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 08 15:51:53 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 08 15:51:53 localhost kernel: Demotion targets for Node 0: null
Oct 08 15:51:53 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 08 15:51:53 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 08 15:51:53 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 08 15:51:53 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 08 15:51:53 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 08 15:51:53 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 08 15:51:53 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 08 15:51:53 localhost kernel: ACPI: Interpreter enabled
Oct 08 15:51:53 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 08 15:51:53 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 08 15:51:53 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 08 15:51:53 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 08 15:51:53 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 08 15:51:53 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 08 15:51:53 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [3] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [4] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [5] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [6] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [7] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [8] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [9] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [10] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [11] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [12] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [13] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [14] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [15] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [16] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [17] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [18] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [19] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [20] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [21] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [22] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [23] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [24] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [25] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [26] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [27] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [28] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [29] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [30] registered
Oct 08 15:51:53 localhost kernel: acpiphp: Slot [31] registered
Oct 08 15:51:53 localhost kernel: PCI host bridge to bus 0000:00
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc180-0xc18f]
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc140-0xc15f]
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 08 15:51:53 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfea80000-0xfeafffff pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 08 15:51:53 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc160-0xc17f]
Oct 08 15:51:53 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 08 15:51:53 localhost kernel: pci 0000:00:07.0: BAR 0 [io  0xc100-0xc13f]
Oct 08 15:51:53 localhost kernel: pci 0000:00:07.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 08 15:51:53 localhost kernel: pci 0000:00:07.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 08 15:51:53 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 08 15:51:53 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 08 15:51:53 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 08 15:51:53 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 08 15:51:53 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 08 15:51:53 localhost kernel: iommu: Default domain type: Translated
Oct 08 15:51:53 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 08 15:51:53 localhost kernel: SCSI subsystem initialized
Oct 08 15:51:53 localhost kernel: ACPI: bus type USB registered
Oct 08 15:51:53 localhost kernel: usbcore: registered new interface driver usbfs
Oct 08 15:51:53 localhost kernel: usbcore: registered new interface driver hub
Oct 08 15:51:53 localhost kernel: usbcore: registered new device driver usb
Oct 08 15:51:53 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 08 15:51:53 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 08 15:51:53 localhost kernel: PTP clock support registered
Oct 08 15:51:53 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 08 15:51:53 localhost kernel: NetLabel: Initializing
Oct 08 15:51:53 localhost kernel: NetLabel:  domain hash size = 128
Oct 08 15:51:53 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 08 15:51:53 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 08 15:51:53 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 08 15:51:53 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 08 15:51:53 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 08 15:51:53 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 08 15:51:53 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 08 15:51:53 localhost kernel: vgaarb: loaded
Oct 08 15:51:53 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 08 15:51:53 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 08 15:51:53 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 08 15:51:53 localhost kernel: pnp: PnP ACPI init
Oct 08 15:51:53 localhost kernel: pnp 00:03: [dma 2]
Oct 08 15:51:53 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 08 15:51:53 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 08 15:51:53 localhost kernel: NET: Registered PF_INET protocol family
Oct 08 15:51:53 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 08 15:51:53 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 08 15:51:53 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 08 15:51:53 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 08 15:51:53 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 08 15:51:53 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 08 15:51:53 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 08 15:51:53 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 08 15:51:53 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 08 15:51:53 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 08 15:51:53 localhost kernel: NET: Registered PF_XDP protocol family
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 08 15:51:53 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 08 15:51:53 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 08 15:51:53 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 08 15:51:53 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 73093 usecs
Oct 08 15:51:53 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 08 15:51:53 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 08 15:51:53 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 08 15:51:53 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 08 15:51:53 localhost kernel: ACPI: bus type thunderbolt registered
Oct 08 15:51:53 localhost kernel: Initialise system trusted keyrings
Oct 08 15:51:53 localhost kernel: Key type blacklist registered
Oct 08 15:51:53 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 08 15:51:53 localhost kernel: zbud: loaded
Oct 08 15:51:53 localhost kernel: integrity: Platform Keyring initialized
Oct 08 15:51:53 localhost kernel: integrity: Machine keyring initialized
Oct 08 15:51:53 localhost kernel: Freeing initrd memory: 86104K
Oct 08 15:51:53 localhost kernel: NET: Registered PF_ALG protocol family
Oct 08 15:51:53 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 08 15:51:53 localhost kernel: Key type asymmetric registered
Oct 08 15:51:53 localhost kernel: Asymmetric key parser 'x509' registered
Oct 08 15:51:53 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 08 15:51:53 localhost kernel: io scheduler mq-deadline registered
Oct 08 15:51:53 localhost kernel: io scheduler kyber registered
Oct 08 15:51:53 localhost kernel: io scheduler bfq registered
Oct 08 15:51:53 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 08 15:51:53 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 08 15:51:53 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 08 15:51:53 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 08 15:51:53 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 08 15:51:53 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 08 15:51:53 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 08 15:51:53 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 08 15:51:53 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 08 15:51:53 localhost kernel: Non-volatile memory driver v1.3
Oct 08 15:51:53 localhost kernel: rdac: device handler registered
Oct 08 15:51:53 localhost kernel: hp_sw: device handler registered
Oct 08 15:51:53 localhost kernel: emc: device handler registered
Oct 08 15:51:53 localhost kernel: alua: device handler registered
Oct 08 15:51:53 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 08 15:51:53 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 08 15:51:53 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 08 15:51:53 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c140
Oct 08 15:51:53 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 08 15:51:53 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 08 15:51:53 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 08 15:51:53 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 08 15:51:53 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 08 15:51:53 localhost kernel: hub 1-0:1.0: USB hub found
Oct 08 15:51:53 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 08 15:51:53 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 08 15:51:53 localhost kernel: usbserial: USB Serial support registered for generic
Oct 08 15:51:53 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 08 15:51:53 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 08 15:51:53 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 08 15:51:53 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 08 15:51:53 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 08 15:51:53 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 08 15:51:53 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 08 15:51:53 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 08 15:51:53 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-08T15:51:52 UTC (1759938712)
Oct 08 15:51:53 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 08 15:51:53 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 08 15:51:53 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 08 15:51:53 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 08 15:51:53 localhost kernel: usbcore: registered new interface driver usbhid
Oct 08 15:51:53 localhost kernel: usbhid: USB HID core driver
Oct 08 15:51:53 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 08 15:51:53 localhost kernel: Initializing XFRM netlink socket
Oct 08 15:51:53 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 08 15:51:53 localhost kernel: Segment Routing with IPv6
Oct 08 15:51:53 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 08 15:51:53 localhost kernel: mpls_gso: MPLS GSO support
Oct 08 15:51:53 localhost kernel: IPI shorthand broadcast: enabled
Oct 08 15:51:53 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 08 15:51:53 localhost kernel: AES CTR mode by8 optimization enabled
Oct 08 15:51:53 localhost kernel: sched_clock: Marking stable (1299003720, 144306750)->(1584175570, -140865100)
Oct 08 15:51:53 localhost kernel: registered taskstats version 1
Oct 08 15:51:53 localhost kernel: Loading compiled-in X.509 certificates
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 08 15:51:53 localhost kernel: Demotion targets for Node 0: null
Oct 08 15:51:53 localhost kernel: page_owner is disabled
Oct 08 15:51:53 localhost kernel: Key type .fscrypt registered
Oct 08 15:51:53 localhost kernel: Key type fscrypt-provisioning registered
Oct 08 15:51:53 localhost kernel: Key type big_key registered
Oct 08 15:51:53 localhost kernel: Key type encrypted registered
Oct 08 15:51:53 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 08 15:51:53 localhost kernel: Loading compiled-in module X.509 certificates
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 08 15:51:53 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 08 15:51:53 localhost kernel: ima: No architecture policies found
Oct 08 15:51:53 localhost kernel: evm: Initialising EVM extended attributes:
Oct 08 15:51:53 localhost kernel: evm: security.selinux
Oct 08 15:51:53 localhost kernel: evm: security.SMACK64 (disabled)
Oct 08 15:51:53 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 08 15:51:53 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 08 15:51:53 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 08 15:51:53 localhost kernel: evm: security.apparmor (disabled)
Oct 08 15:51:53 localhost kernel: evm: security.ima
Oct 08 15:51:53 localhost kernel: evm: security.capability
Oct 08 15:51:53 localhost kernel: evm: HMAC attrs: 0x1
Oct 08 15:51:53 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 08 15:51:53 localhost kernel: Running certificate verification RSA selftest
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 08 15:51:53 localhost kernel: Running certificate verification ECDSA selftest
Oct 08 15:51:53 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 08 15:51:53 localhost kernel: clk: Disabling unused clocks
Oct 08 15:51:53 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 08 15:51:53 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 08 15:51:53 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 08 15:51:53 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 08 15:51:53 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 08 15:51:53 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 08 15:51:53 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 08 15:51:53 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 08 15:51:53 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 08 15:51:53 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 08 15:51:53 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 08 15:51:53 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 08 15:51:53 localhost kernel: Run /init as init process
Oct 08 15:51:53 localhost kernel:   with arguments:
Oct 08 15:51:53 localhost kernel:     /init
Oct 08 15:51:53 localhost kernel:   with environment:
Oct 08 15:51:53 localhost kernel:     HOME=/
Oct 08 15:51:53 localhost kernel:     TERM=linux
Oct 08 15:51:53 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 08 15:51:53 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 08 15:51:53 localhost systemd[1]: Detected virtualization kvm.
Oct 08 15:51:53 localhost systemd[1]: Detected architecture x86-64.
Oct 08 15:51:53 localhost systemd[1]: Running in initrd.
Oct 08 15:51:53 localhost systemd[1]: No hostname configured, using default hostname.
Oct 08 15:51:53 localhost systemd[1]: Hostname set to <localhost>.
Oct 08 15:51:53 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 08 15:51:53 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 08 15:51:53 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 08 15:51:53 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 08 15:51:53 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 08 15:51:53 localhost systemd[1]: Reached target Local File Systems.
Oct 08 15:51:53 localhost systemd[1]: Reached target Path Units.
Oct 08 15:51:53 localhost systemd[1]: Reached target Slice Units.
Oct 08 15:51:53 localhost systemd[1]: Reached target Swaps.
Oct 08 15:51:53 localhost systemd[1]: Reached target Timer Units.
Oct 08 15:51:53 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 08 15:51:53 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 08 15:51:53 localhost systemd[1]: Listening on Journal Socket.
Oct 08 15:51:53 localhost systemd[1]: Listening on udev Control Socket.
Oct 08 15:51:53 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 08 15:51:53 localhost systemd[1]: Reached target Socket Units.
Oct 08 15:51:53 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 08 15:51:53 localhost systemd[1]: Starting Journal Service...
Oct 08 15:51:53 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 08 15:51:53 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 08 15:51:53 localhost systemd[1]: Starting Create System Users...
Oct 08 15:51:53 localhost systemd[1]: Starting Setup Virtual Console...
Oct 08 15:51:53 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 08 15:51:53 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 08 15:51:53 localhost systemd-journald[309]: Journal started
Oct 08 15:51:53 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/7ed7365c752d466f920a97a2ec0fb2e1) is 8.0M, max 153.5M, 145.5M free.
Oct 08 15:51:53 localhost systemd[1]: Started Journal Service.
Oct 08 15:51:53 localhost systemd-sysusers[314]: Creating group 'users' with GID 100.
Oct 08 15:51:53 localhost systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Oct 08 15:51:53 localhost systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 08 15:51:53 localhost systemd[1]: Finished Create System Users.
Oct 08 15:51:53 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 08 15:51:53 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 08 15:51:53 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 08 15:51:53 localhost systemd[1]: Finished Setup Virtual Console.
Oct 08 15:51:53 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 08 15:51:53 localhost systemd[1]: Starting dracut cmdline hook...
Oct 08 15:51:53 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 08 15:51:53 localhost dracut-cmdline[330]: dracut-9 dracut-057-102.git20250818.el9
Oct 08 15:51:53 localhost dracut-cmdline[330]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 08 15:51:53 localhost systemd[1]: Finished dracut cmdline hook.
Oct 08 15:51:53 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 08 15:51:53 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 08 15:51:53 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 08 15:51:53 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 08 15:51:53 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 08 15:51:53 localhost kernel: RPC: Registered udp transport module.
Oct 08 15:51:53 localhost kernel: RPC: Registered tcp transport module.
Oct 08 15:51:53 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 08 15:51:53 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 08 15:51:53 localhost rpc.statd[447]: Version 2.5.4 starting
Oct 08 15:51:53 localhost rpc.statd[447]: Initializing NSM state
Oct 08 15:51:53 localhost rpc.idmapd[452]: Setting log level to 0
Oct 08 15:51:53 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 08 15:51:53 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 08 15:51:53 localhost systemd-udevd[465]: Using default interface naming scheme 'rhel-9.0'.
Oct 08 15:51:53 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 08 15:51:53 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 08 15:51:53 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 08 15:51:53 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 08 15:51:54 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 08 15:51:54 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 08 15:51:54 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 08 15:51:54 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 08 15:51:54 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 08 15:51:54 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 08 15:51:54 localhost systemd[1]: Reached target Network.
Oct 08 15:51:54 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 08 15:51:54 localhost systemd[1]: Starting dracut initqueue hook...
Oct 08 15:51:54 localhost systemd-udevd[514]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:54 localhost systemd-udevd[492]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:54 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 08 15:51:54 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 08 15:51:54 localhost kernel:  vda: vda1
Oct 08 15:51:54 localhost kernel: libata version 3.00 loaded.
Oct 08 15:51:54 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 08 15:51:54 localhost kernel: scsi host0: ata_piix
Oct 08 15:51:54 localhost kernel: scsi host1: ata_piix
Oct 08 15:51:54 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc180 irq 14 lpm-pol 0
Oct 08 15:51:54 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc188 irq 15 lpm-pol 0
Oct 08 15:51:54 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 08 15:51:54 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 08 15:51:54 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 08 15:51:54 localhost systemd[1]: Reached target Initrd Root Device.
Oct 08 15:51:54 localhost systemd[1]: Reached target System Initialization.
Oct 08 15:51:54 localhost systemd[1]: Reached target Basic System.
Oct 08 15:51:54 localhost kernel: ata1: found unknown device (class 0)
Oct 08 15:51:54 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 08 15:51:54 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 08 15:51:54 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 08 15:51:54 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 08 15:51:54 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 08 15:51:54 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 08 15:51:54 localhost systemd[1]: Finished dracut initqueue hook.
Oct 08 15:51:54 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 08 15:51:54 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 08 15:51:54 localhost systemd[1]: Reached target Remote File Systems.
Oct 08 15:51:54 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 08 15:51:54 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 08 15:51:54 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 08 15:51:54 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct 08 15:51:54 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 08 15:51:54 localhost systemd[1]: Mounting /sysroot...
Oct 08 15:51:55 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 08 15:51:55 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 08 15:51:55 localhost kernel: XFS (vda1): Ending clean mount
Oct 08 15:51:55 localhost systemd[1]: Mounted /sysroot.
Oct 08 15:51:55 localhost systemd[1]: Reached target Initrd Root File System.
Oct 08 15:51:55 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 08 15:51:55 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 08 15:51:55 localhost systemd[1]: Reached target Initrd File Systems.
Oct 08 15:51:55 localhost systemd[1]: Reached target Initrd Default Target.
Oct 08 15:51:55 localhost systemd[1]: Starting dracut mount hook...
Oct 08 15:51:55 localhost systemd[1]: Finished dracut mount hook.
Oct 08 15:51:55 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 08 15:51:55 localhost rpc.idmapd[452]: exiting on signal 15
Oct 08 15:51:55 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 08 15:51:55 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 08 15:51:55 localhost systemd[1]: Stopped target Network.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Timer Units.
Oct 08 15:51:55 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 08 15:51:55 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Basic System.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Path Units.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Remote File Systems.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Slice Units.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Socket Units.
Oct 08 15:51:55 localhost systemd[1]: Stopped target System Initialization.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Local File Systems.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Swaps.
Oct 08 15:51:55 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut mount hook.
Oct 08 15:51:55 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 08 15:51:55 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 08 15:51:55 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 08 15:51:55 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 08 15:51:55 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 08 15:51:55 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 08 15:51:55 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 08 15:51:55 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 08 15:51:55 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 08 15:51:55 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 08 15:51:55 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 08 15:51:55 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 08 15:51:55 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Closed udev Control Socket.
Oct 08 15:51:55 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Closed udev Kernel Socket.
Oct 08 15:51:55 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 08 15:51:55 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 08 15:51:55 localhost systemd[1]: Starting Cleanup udev Database...
Oct 08 15:51:55 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 08 15:51:55 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 08 15:51:55 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Stopped Create System Users.
Oct 08 15:51:55 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 08 15:51:55 localhost systemd[1]: Finished Cleanup udev Database.
Oct 08 15:51:55 localhost systemd[1]: Reached target Switch Root.
Oct 08 15:51:55 localhost systemd[1]: Starting Switch Root...
Oct 08 15:51:55 localhost systemd[1]: Switching root.
Oct 08 15:51:55 localhost systemd-journald[309]: Journal stopped
Oct 08 15:51:56 compute-0 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Oct 08 15:51:56 compute-0 kernel: audit: type=1404 audit(1759938715.636:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 15:51:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 15:51:56 compute-0 kernel: audit: type=1403 audit(1759938715.779:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 08 15:51:56 compute-0 systemd[1]: Successfully loaded SELinux policy in 147.870ms.
Oct 08 15:51:56 compute-0 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.845ms.
Oct 08 15:51:56 compute-0 systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 08 15:51:56 compute-0 systemd[1]: Detected virtualization kvm.
Oct 08 15:51:56 compute-0 systemd[1]: Detected architecture x86-64.
Oct 08 15:51:56 compute-0 systemd[1]: Hostname set to <compute-0>.
Oct 08 15:51:56 compute-0 systemd-rc-local-generator[641]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:51:56 compute-0 systemd-sysv-generator[644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:51:56 compute-0 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Stopped Switch Root.
Oct 08 15:51:56 compute-0 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 08 15:51:56 compute-0 systemd[1]: Created slice Slice /system/getty.
Oct 08 15:51:56 compute-0 systemd[1]: Created slice Slice /system/serial-getty.
Oct 08 15:51:56 compute-0 systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 08 15:51:56 compute-0 systemd[1]: Created slice User and Session Slice.
Oct 08 15:51:56 compute-0 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 08 15:51:56 compute-0 systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 08 15:51:56 compute-0 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Local Encrypted Volumes.
Oct 08 15:51:56 compute-0 systemd[1]: Stopped target Switch Root.
Oct 08 15:51:56 compute-0 systemd[1]: Stopped target Initrd File Systems.
Oct 08 15:51:56 compute-0 systemd[1]: Stopped target Initrd Root File System.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Path Units.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target rpc_pipefs.target.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Slice Units.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Local Verity Protected Volumes.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target RPC Port Mapper.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on Process Core Dump Socket.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on udev Control Socket.
Oct 08 15:51:56 compute-0 systemd[1]: Listening on udev Kernel Socket.
Oct 08 15:51:56 compute-0 systemd[1]: Mounting Huge Pages File System...
Oct 08 15:51:56 compute-0 systemd[1]: Mounting /dev/hugepages1G...
Oct 08 15:51:56 compute-0 systemd[1]: Mounting /dev/hugepages2M...
Oct 08 15:51:56 compute-0 systemd[1]: Mounting POSIX Message Queue File System...
Oct 08 15:51:56 compute-0 systemd[1]: Mounting Kernel Debug File System...
Oct 08 15:51:56 compute-0 systemd[1]: Mounting Kernel Trace File System...
Oct 08 15:51:56 compute-0 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 08 15:51:56 compute-0 systemd[1]: Starting Create List of Static Device Nodes...
Oct 08 15:51:56 compute-0 systemd[1]: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load Kernel Module drm...
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load Kernel Module fuse...
Oct 08 15:51:56 compute-0 systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 08 15:51:56 compute-0 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Stopped File System Check on Root Device.
Oct 08 15:51:56 compute-0 systemd[1]: Stopped Journal Service.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Journal Service...
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 08 15:51:56 compute-0 kernel: fuse: init (API version 7.37)
Oct 08 15:51:56 compute-0 systemd[1]: Starting Generate network units from Kernel command line...
Oct 08 15:51:56 compute-0 systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 08 15:51:56 compute-0 systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 08 15:51:56 compute-0 systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Coldplug All udev Devices...
Oct 08 15:51:56 compute-0 systemd[1]: Mounted Huge Pages File System.
Oct 08 15:51:56 compute-0 systemd[1]: Mounted /dev/hugepages1G.
Oct 08 15:51:56 compute-0 systemd[1]: Mounted /dev/hugepages2M.
Oct 08 15:51:56 compute-0 systemd-journald[690]: Journal started
Oct 08 15:51:56 compute-0 systemd-journald[690]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 08 15:51:56 compute-0 systemd[1]: Queued start job for default target Multi-User System.
Oct 08 15:51:56 compute-0 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Started Journal Service.
Oct 08 15:51:56 compute-0 systemd[1]: Mounted POSIX Message Queue File System.
Oct 08 15:51:56 compute-0 systemd[1]: Mounted Kernel Debug File System.
Oct 08 15:51:56 compute-0 systemd[1]: Mounted Kernel Trace File System.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Create List of Static Device Nodes.
Oct 08 15:51:56 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct 08 15:51:56 compute-0 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 08 15:51:56 compute-0 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load Kernel Module fuse.
Oct 08 15:51:56 compute-0 kernel: ACPI: bus type drm_connector registered
Oct 08 15:51:56 compute-0 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 08 15:51:56 compute-0 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load Kernel Module drm.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Generate network units from Kernel command line.
Oct 08 15:51:56 compute-0 systemd[1]: Mounting FUSE Control File System...
Oct 08 15:51:56 compute-0 systemd[1]: Mounted FUSE Control File System.
Oct 08 15:51:56 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 08 15:51:56 compute-0 kernel: Bridge firewalling registered
Oct 08 15:51:56 compute-0 systemd-modules-load[691]: Inserted module 'br_netfilter'
Oct 08 15:51:56 compute-0 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 08 15:51:56 compute-0 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 08 15:51:56 compute-0 systemd[1]: Activating swap /swap...
Oct 08 15:51:56 compute-0 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 08 15:51:56 compute-0 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct 08 15:51:56 compute-0 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 08 15:51:56 compute-0 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load/Save OS Random Seed...
Oct 08 15:51:56 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 08 15:51:56 compute-0 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 systemd-modules-load[691]: Inserted module 'nf_conntrack'
Oct 08 15:51:56 compute-0 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 08 15:51:56 compute-0 systemd[1]: Activated swap /swap.
Oct 08 15:51:56 compute-0 systemd-journald[690]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 8.242ms for 781 entries.
Oct 08 15:51:56 compute-0 systemd-journald[690]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct 08 15:51:56 compute-0 systemd-journald[690]: Received client request to flush runtime journal.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load/Save OS Random Seed.
Oct 08 15:51:56 compute-0 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Swaps.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Apply Kernel Variables...
Oct 08 15:51:56 compute-0 systemd[1]: Finished Coldplug All udev Devices.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Apply Kernel Variables.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Preparation for Local File Systems.
Oct 08 15:51:56 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 08 15:51:56 compute-0 systemd[1]: Reached target Local File Systems.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Import network configuration from initramfs...
Oct 08 15:51:56 compute-0 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 08 15:51:56 compute-0 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Automatic Boot Loader Update...
Oct 08 15:51:56 compute-0 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 08 15:51:56 compute-0 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 08 15:51:56 compute-0 bootctl[708]: Couldn't find EFI system partition, skipping.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Automatic Boot Loader Update.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Import network configuration from initramfs.
Oct 08 15:51:56 compute-0 systemd-udevd[710]: Using default interface naming scheme 'rhel-9.0'.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Create Volatile Files and Directories...
Oct 08 15:51:56 compute-0 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Load Kernel Module configfs...
Oct 08 15:51:56 compute-0 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 08 15:51:56 compute-0 systemd[1]: Finished Load Kernel Module configfs.
Oct 08 15:51:56 compute-0 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 08 15:51:56 compute-0 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:56 compute-0 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 08 15:51:56 compute-0 systemd[1]: Finished Create Volatile Files and Directories.
Oct 08 15:51:56 compute-0 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 08 15:51:56 compute-0 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 08 15:51:56 compute-0 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 08 15:51:56 compute-0 systemd-udevd[745]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:56 compute-0 systemd[1]: Starting Security Auditing Service...
Oct 08 15:51:56 compute-0 systemd[1]: Starting RPC Bind...
Oct 08 15:51:56 compute-0 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct 08 15:51:56 compute-0 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct 08 15:51:56 compute-0 auditd[782]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 08 15:51:56 compute-0 auditd[782]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 08 15:51:56 compute-0 systemd[1]: Started RPC Bind.
Oct 08 15:51:56 compute-0 kernel: kvm_amd: TSC scaling supported
Oct 08 15:51:56 compute-0 kernel: kvm_amd: Nested Virtualization enabled
Oct 08 15:51:56 compute-0 kernel: kvm_amd: Nested Paging enabled
Oct 08 15:51:56 compute-0 kernel: kvm_amd: LBR virtualization supported
Oct 08 15:51:56 compute-0 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 08 15:51:56 compute-0 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 08 15:51:56 compute-0 kernel: Console: switching to colour dummy device 80x25
Oct 08 15:51:56 compute-0 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 08 15:51:56 compute-0 kernel: [drm] features: -context_init
Oct 08 15:51:56 compute-0 kernel: [drm] number of scanouts: 1
Oct 08 15:51:56 compute-0 kernel: [drm] number of cap sets: 0
Oct 08 15:51:57 compute-0 augenrules[787]: /sbin/augenrules: No change
Oct 08 15:51:57 compute-0 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 08 15:51:57 compute-0 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 08 15:51:57 compute-0 kernel: Console: switching to colour frame buffer device 128x48
Oct 08 15:51:57 compute-0 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 08 15:51:57 compute-0 augenrules[807]: No rules
Oct 08 15:51:57 compute-0 augenrules[807]: enabled 1
Oct 08 15:51:57 compute-0 augenrules[807]: failure 1
Oct 08 15:51:57 compute-0 augenrules[807]: pid 782
Oct 08 15:51:57 compute-0 augenrules[807]: rate_limit 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_limit 8192
Oct 08 15:51:57 compute-0 augenrules[807]: lost 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_wait_time 60000
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_wait_time_actual 0
Oct 08 15:51:57 compute-0 augenrules[807]: enabled 1
Oct 08 15:51:57 compute-0 augenrules[807]: failure 1
Oct 08 15:51:57 compute-0 augenrules[807]: pid 782
Oct 08 15:51:57 compute-0 augenrules[807]: rate_limit 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_limit 8192
Oct 08 15:51:57 compute-0 augenrules[807]: lost 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_wait_time 60000
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_wait_time_actual 0
Oct 08 15:51:57 compute-0 augenrules[807]: enabled 1
Oct 08 15:51:57 compute-0 augenrules[807]: failure 1
Oct 08 15:51:57 compute-0 augenrules[807]: pid 782
Oct 08 15:51:57 compute-0 augenrules[807]: rate_limit 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_limit 8192
Oct 08 15:51:57 compute-0 augenrules[807]: lost 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog 0
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_wait_time 60000
Oct 08 15:51:57 compute-0 augenrules[807]: backlog_wait_time_actual 0
Oct 08 15:51:57 compute-0 systemd[1]: Started Security Auditing Service.
Oct 08 15:51:57 compute-0 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 08 15:51:57 compute-0 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 08 15:51:57 compute-0 systemd[1]: Reached target System Initialization.
Oct 08 15:51:57 compute-0 systemd[1]: Started dnf makecache --timer.
Oct 08 15:51:57 compute-0 systemd[1]: Started Daily rotation of log files.
Oct 08 15:51:57 compute-0 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct 08 15:51:57 compute-0 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct 08 15:51:57 compute-0 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 08 15:51:57 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 08 15:51:57 compute-0 systemd[1]: Reached target Timer Units.
Oct 08 15:51:57 compute-0 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 08 15:51:57 compute-0 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 08 15:51:57 compute-0 systemd[1]: Reached target Socket Units.
Oct 08 15:51:57 compute-0 systemd[1]: Starting D-Bus System Message Bus...
Oct 08 15:51:57 compute-0 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 08 15:51:57 compute-0 systemd[1]: Started D-Bus System Message Bus.
Oct 08 15:51:57 compute-0 systemd[1]: Reached target Basic System.
Oct 08 15:51:57 compute-0 dbus-broker-lau[838]: Ready
Oct 08 15:51:57 compute-0 systemd[1]: Starting NTP client/server...
Oct 08 15:51:57 compute-0 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 08 15:51:57 compute-0 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 08 15:51:57 compute-0 systemd[1]: Started irqbalance daemon.
Oct 08 15:51:57 compute-0 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 08 15:51:57 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 15:51:57 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 08 15:51:57 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 15:51:57 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 15:51:57 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 15:51:57 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 08 15:51:57 compute-0 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 08 15:51:57 compute-0 systemd[1]: Reached target User and Group Name Lookups.
Oct 08 15:51:57 compute-0 systemd[1]: Starting Resets System Activity Logs...
Oct 08 15:51:57 compute-0 systemd[1]: Starting User Login Management...
Oct 08 15:51:57 compute-0 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 08 15:51:57 compute-0 systemd[1]: Finished Resets System Activity Logs.
Oct 08 15:51:57 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 15:51:57 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 15:51:57 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 15:51:57 compute-0 chronyd[853]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 08 15:51:57 compute-0 chronyd[853]: Frequency -28.858 +/- 0.440 ppm read from /var/lib/chrony/drift
Oct 08 15:51:57 compute-0 chronyd[853]: Loaded seccomp filter (level 2)
Oct 08 15:51:57 compute-0 systemd[1]: Started NTP client/server.
Oct 08 15:51:57 compute-0 systemd-logind[847]: New seat seat0.
Oct 08 15:51:57 compute-0 systemd-logind[847]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 08 15:51:57 compute-0 systemd-logind[847]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 08 15:51:57 compute-0 systemd[1]: Started User Login Management.
Oct 08 15:51:57 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 08 15:51:58 compute-0 cloud-init[874]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 08 Oct 2025 15:51:58 +0000. Up 6.95 seconds.
Oct 08 15:51:58 compute-0 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 08 15:51:58 compute-0 systemd[1]: Reached target Preparation for Network.
Oct 08 15:51:58 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Oct 08 15:51:58 compute-0 chown[876]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 08 15:51:58 compute-0 ovs-ctl[881]: Starting ovsdb-server [  OK  ]
Oct 08 15:51:58 compute-0 ovs-vsctl[930]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 08 15:51:58 compute-0 ovs-vsctl[940]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"f72d8dca-98f2-44ea-b875-cd9a8b583db6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 08 15:51:58 compute-0 ovs-ctl[881]: Configuring Open vSwitch system IDs [  OK  ]
Oct 08 15:51:58 compute-0 ovs-vsctl[946]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 08 15:51:58 compute-0 ovs-ctl[881]: Enabling remote OVSDB managers [  OK  ]
Oct 08 15:51:58 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Oct 08 15:51:58 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 08 15:51:58 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 08 15:51:58 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 08 15:51:59 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Oct 08 15:51:59 compute-0 ovs-ctl[990]: Inserting openvswitch module [  OK  ]
Oct 08 15:51:59 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 08 15:51:59 compute-0 kernel: Timeout policy base is empty
Oct 08 15:51:59 compute-0 systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:59 compute-0 kernel: vlan22: entered promiscuous mode
Oct 08 15:51:59 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 08 15:51:59 compute-0 kernel: vlan20: entered promiscuous mode
Oct 08 15:51:59 compute-0 systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:59 compute-0 kernel: vlan21: entered promiscuous mode
Oct 08 15:51:59 compute-0 ovs-ctl[959]: Starting ovs-vswitchd [  OK  ]
Oct 08 15:51:59 compute-0 ovs-vsctl[1031]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct 08 15:51:59 compute-0 ovs-ctl[959]: Enabling remote OVSDB managers [  OK  ]
Oct 08 15:51:59 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 08 15:51:59 compute-0 systemd[1]: Starting Open vSwitch...
Oct 08 15:51:59 compute-0 systemd[1]: Finished Open vSwitch.
Oct 08 15:51:59 compute-0 systemd[1]: Starting Network Manager...
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.5129] NetworkManager (version 1.54.1-1.el9) is starting... (boot:26ad96c6-a252-4eae-b5a9-fcad2bf5b882)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.5133] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.5280] manager[0x561dc15fb040]: monitoring kernel firmware directory '/lib/firmware'.
Oct 08 15:51:59 compute-0 systemd[1]: Starting Hostname Service...
Oct 08 15:51:59 compute-0 systemd[1]: Started Hostname Service.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6286] hostname: hostname: using hostnamed
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6289] hostname: static hostname changed from (none) to "compute-0"
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6295] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6410] manager[0x561dc15fb040]: rfkill: Wi-Fi hardware radio set enabled
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6411] manager[0x561dc15fb040]: rfkill: WWAN hardware radio set enabled
Oct 08 15:51:59 compute-0 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6479] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6507] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6508] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6509] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6509] manager: Networking is enabled by state file
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6518] settings: Loaded settings plugin: keyfile (internal)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6544] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6647] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6672] dhcp: init: Using DHCP client 'internal'
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6675] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6691] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6704] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6712] device (lo): Activation: starting connection 'lo' (547c6990-704e-4fad-b602-56fa040f4fab)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6722] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6725] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6756] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6759] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6775] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6777] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6793] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6794] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6811] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/6)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6816] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6843] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6844] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6852] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6854] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6862] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6864] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6873] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6876] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6883] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6885] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6893] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6895] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6905] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/13)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6908] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 systemd[1]: Started Network Manager.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6921] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6929] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6932] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6933] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6934] device (eth0): carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6936] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6937] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6939] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6940] device (eth1): carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6945] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 08 15:51:59 compute-0 systemd[1]: Reached target Network.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6952] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 15:51:59 compute-0 kernel: vlan22: left promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6986] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.6991] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7056] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7060] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7065] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7067] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7069] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7070] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7073] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7076] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7085] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7087] policy: auto-activating connection 'ci-private-network' (89974788-9759-5126-a1b0-ee1f04e80c80)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7089] policy: auto-activating connection 'vlan20-port' (3b569287-ef29-4f05-8329-32432120a24a)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7090] policy: auto-activating connection 'br-ex-port' (6f775df7-7452-4c7f-81d8-660834f4347b)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7091] policy: auto-activating connection 'vlan21-port' (8a540075-efdf-4afe-bf61-d759a4b00f46)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7092] policy: auto-activating connection 'eth1-port' (af5f1416-3966-4955-8b52-cba166da278c)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7092] policy: auto-activating connection 'vlan22-port' (caa439f9-e6e4-4742-98a1-82f241cb6779)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7093] policy: auto-activating connection 'br-ex-br' (f8d00a29-d3d4-4fa7-a1aa-dcc5b4e7af38)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7098] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7106] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7109] device (eth1): Activation: starting connection 'ci-private-network' (89974788-9759-5126-a1b0-ee1f04e80c80)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7111] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (3b569287-ef29-4f05-8329-32432120a24a)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7114] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6f775df7-7452-4c7f-81d8-660834f4347b)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7116] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8a540075-efdf-4afe-bf61-d759a4b00f46)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7119] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (af5f1416-3966-4955-8b52-cba166da278c)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7121] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (caa439f9-e6e4-4742-98a1-82f241cb6779)
Oct 08 15:51:59 compute-0 systemd[1]: Starting Network Manager Wait Online...
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7126] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f8d00a29-d3d4-4fa7-a1aa-dcc5b4e7af38)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7132] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7135] manager: NetworkManager state is now CONNECTING
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7138] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7148] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7152] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7156] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7161] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7165] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7176] device (eth1): disconnecting for new activation request.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7177] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7181] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7186] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7194] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct 08 15:51:59 compute-0 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7196] device (eth1)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7208] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7208] device (vlan20)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7218] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7219] device (vlan21)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7228] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7229] device (vlan22)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7236] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7236] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7239] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7240] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7242] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7254] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7258] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 08 15:51:59 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7270] device (eth1): disconnecting for new activation request.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7272] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 kernel: vlan21: left promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7284] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 08 15:51:59 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7323] device (lo): Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7330] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7344] device (eth1): Activation: starting connection 'ci-private-network' (89974788-9759-5126-a1b0-ee1f04e80c80)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7347] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7351] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6f775df7-7452-4c7f-81d8-660834f4347b)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7354] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7359] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (af5f1416-3966-4955-8b52-cba166da278c)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7362] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7367] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (3b569287-ef29-4f05-8329-32432120a24a)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7370] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7375] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8a540075-efdf-4afe-bf61-d759a4b00f46)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7377] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7383] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (caa439f9-e6e4-4742-98a1-82f241cb6779)
Oct 08 15:51:59 compute-0 kernel: vlan20: left promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7401] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7406] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7413] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7416] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7418] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7419] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7420] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7424] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7425] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7426] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7428] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7432] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7433] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7434] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7435] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7438] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7439] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7440] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7441] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7444] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7445] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7446] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7447] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7466] dhcp4 (eth0): state changed new lease, address=38.102.83.199
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7493] policy: auto-activating connection 'vlan21-if' (fb9505c7-4408-4328-8902-7d8a6c218336)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7495] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7499] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7504] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7508] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7512] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7516] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7518] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7521] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7527] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7533] policy: auto-activating connection 'vlan20-if' (ed2ca21e-27c3-49ea-ad55-f37056d3a56e)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7535] policy: auto-activating connection 'vlan22-if' (72bd3ec8-6013-4315-bc14-bdfd55b5fb62)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7547] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 08 15:51:59 compute-0 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct 08 15:51:59 compute-0 kernel: ovs-system: left promiscuous mode
Oct 08 15:51:59 compute-0 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 08 15:51:59 compute-0 systemd[1]: Reached target NFS client services.
Oct 08 15:51:59 compute-0 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7604] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7608] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7618] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (fb9505c7-4408-4328-8902-7d8a6c218336)
Oct 08 15:51:59 compute-0 systemd[1]: Reached target Remote File Systems.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7620] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7635] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7642] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7649] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ed2ca21e-27c3-49ea-ad55-f37056d3a56e)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7653] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7664] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7675] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (72bd3ec8-6013-4315-bc14-bdfd55b5fb62)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7675] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7684] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7693] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7698] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7702] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 08 15:51:59 compute-0 kernel: ovs-system: entered promiscuous mode
Oct 08 15:51:59 compute-0 kernel: No such timeout policy "ovs_test_tp"
Oct 08 15:51:59 compute-0 systemd-udevd[1089]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:59 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7761] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7764] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7766] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7784] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7789] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7795] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7801] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7810] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7817] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7821] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7828] policy: auto-activating connection 'br-ex-if' (a26c7acd-b91d-4a0e-8f84-8d77d07b13df)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7829] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7837] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7844] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7854] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7861] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7864] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7866] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7870] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7875] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7886] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a26c7acd-b91d-4a0e-8f84-8d77d07b13df)
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7887] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7895] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7901] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7909] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7917] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7928] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7936] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7948] device (eth0): Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7956] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 08 15:51:59 compute-0 kernel: vlan21: entered promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7965] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7972] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7977] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7985] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7989] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.7995] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8002] device (eth1): Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8013] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8048] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8075] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 kernel: vlan22: entered promiscuous mode
Oct 08 15:51:59 compute-0 systemd-udevd[1091]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8093] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8118] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8119] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8128] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 kernel: vlan20: entered promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8182] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8196] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8217] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8220] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8232] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8247] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8260] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8293] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8294] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8305] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 kernel: br-ex: entered promiscuous mode
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8509] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8524] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8557] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8558] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8567] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 08 15:51:59 compute-0 NetworkManager[1034]: <info>  [1759938719.8576] manager: startup complete
Oct 08 15:51:59 compute-0 systemd[1]: Finished Network Manager Wait Online.
Oct 08 15:51:59 compute-0 systemd[1]: Starting Cloud-init: Network Stage...
Oct 08 15:51:59 compute-0 systemd[1]: Starting Authorization Manager...
Oct 08 15:51:59 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 08 15:52:00 compute-0 polkitd[1182]: Started polkitd version 0.117
Oct 08 15:52:00 compute-0 polkitd[1182]: Loading rules from directory /etc/polkit-1/rules.d
Oct 08 15:52:00 compute-0 polkitd[1182]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 08 15:52:00 compute-0 polkitd[1182]: Finished loading, compiling and executing 3 rules
Oct 08 15:52:00 compute-0 systemd[1]: Started Authorization Manager.
Oct 08 15:52:00 compute-0 polkitd[1182]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 08 15:52:00 compute-0 cloud-init[1262]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 08 Oct 2025 15:52:00 +0000. Up 8.90 seconds.
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   br-ex    |  True | 192.168.122.100 | 255.255.255.0 | global | fa:16:3e:7e:bd:78 |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |    eth0    |  True |  38.102.83.199  | 255.255.255.0 | global | fa:16:3e:5b:e1:90 |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:7e:bd:78 |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: | ovs-system | False |        .        |       .       |   .    | b2:ae:54:9d:1f:ac |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   vlan20   |  True |   172.17.0.100  | 255.255.255.0 | global | 46:24:6f:0d:01:d2 |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   vlan21   |  True |   172.18.0.100  | 255.255.255.0 | global | c6:f5:72:53:67:d4 |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   vlan22   |  True |   172.19.0.100  | 255.255.255.0 | global | ba:80:64:c0:33:97 |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   3   |    172.17.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   4   |    172.18.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   5   |    172.19.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   6   |  192.168.122.0  |    0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct 08 15:52:00 compute-0 cloud-init[1262]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 08 15:52:00 compute-0 systemd[1]: Finished Cloud-init: Network Stage.
Oct 08 15:52:00 compute-0 systemd[1]: Reached target Cloud-config availability.
Oct 08 15:52:00 compute-0 systemd[1]: Reached target Network is Online.
Oct 08 15:52:00 compute-0 systemd[1]: Starting Cloud-init: Config Stage...
Oct 08 15:52:00 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Oct 08 15:52:00 compute-0 systemd[1]: Starting Notify NFS peers of a restart...
Oct 08 15:52:00 compute-0 systemd[1]: Starting System Logging Service...
Oct 08 15:52:00 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 08 15:52:00 compute-0 sm-notify[1295]: Version 2.5.4 starting
Oct 08 15:52:00 compute-0 systemd[1]: Starting Permit User Sessions...
Oct 08 15:52:00 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Oct 08 15:52:00 compute-0 systemd[1]: Started Notify NFS peers of a restart.
Oct 08 15:52:00 compute-0 systemd[1]: Finished Permit User Sessions.
Oct 08 15:52:00 compute-0 sshd[1297]: Server listening on 0.0.0.0 port 22.
Oct 08 15:52:00 compute-0 sshd[1297]: Server listening on :: port 22.
Oct 08 15:52:00 compute-0 systemd[1]: Started Command Scheduler.
Oct 08 15:52:00 compute-0 systemd[1]: Started Getty on tty1.
Oct 08 15:52:00 compute-0 systemd[1]: Started Serial Getty on ttyS0.
Oct 08 15:52:00 compute-0 systemd[1]: Reached target Login Prompts.
Oct 08 15:52:00 compute-0 rsyslogd[1296]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1296" x-info="https://www.rsyslog.com"] start
Oct 08 15:52:00 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 08 15:52:00 compute-0 systemd[1]: Started System Logging Service.
Oct 08 15:52:00 compute-0 systemd[1]: Reached target Multi-User System.
Oct 08 15:52:00 compute-0 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 08 15:52:00 compute-0 crond[1299]: (CRON) STARTUP (1.5.7)
Oct 08 15:52:00 compute-0 crond[1299]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 08 15:52:00 compute-0 crond[1299]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 27% if used.)
Oct 08 15:52:00 compute-0 crond[1299]: (CRON) INFO (running with inotify support)
Oct 08 15:52:00 compute-0 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 08 15:52:00 compute-0 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 08 15:52:00 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 15:52:00 compute-0 cloud-init[1309]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 08 Oct 2025 15:52:00 +0000. Up 9.53 seconds.
Oct 08 15:52:00 compute-0 systemd[1]: Finished Cloud-init: Config Stage.
Oct 08 15:52:00 compute-0 systemd[1]: Starting Cloud-init: Final Stage...
Oct 08 15:52:01 compute-0 cloud-init[1313]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 08 Oct 2025 15:52:01 +0000. Up 9.98 seconds.
Oct 08 15:52:01 compute-0 cloud-init[1313]: Cloud-init v. 24.4-7.el9 finished at Wed, 08 Oct 2025 15:52:01 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.05 seconds
Oct 08 15:52:01 compute-0 systemd[1]: Finished Cloud-init: Final Stage.
Oct 08 15:52:01 compute-0 systemd[1]: Reached target Cloud-init target.
Oct 08 15:52:01 compute-0 systemd[1]: Startup finished in 1.738s (kernel) + 2.639s (initrd) + 5.762s (userspace) = 10.139s.
Oct 08 15:52:08 compute-0 irqbalance[843]: Cannot change IRQ 35 affinity: Operation not permitted
Oct 08 15:52:08 compute-0 irqbalance[843]: IRQ 35 affinity is now unmanaged
Oct 08 15:52:08 compute-0 irqbalance[843]: Cannot change IRQ 33 affinity: Operation not permitted
Oct 08 15:52:08 compute-0 irqbalance[843]: IRQ 33 affinity is now unmanaged
Oct 08 15:52:08 compute-0 irqbalance[843]: Cannot change IRQ 38 affinity: Operation not permitted
Oct 08 15:52:08 compute-0 irqbalance[843]: IRQ 38 affinity is now unmanaged
Oct 08 15:52:08 compute-0 irqbalance[843]: Cannot change IRQ 36 affinity: Operation not permitted
Oct 08 15:52:08 compute-0 irqbalance[843]: IRQ 36 affinity is now unmanaged
Oct 08 15:52:08 compute-0 irqbalance[843]: Cannot change IRQ 34 affinity: Operation not permitted
Oct 08 15:52:08 compute-0 irqbalance[843]: IRQ 34 affinity is now unmanaged
Oct 08 15:52:08 compute-0 irqbalance[843]: Cannot change IRQ 37 affinity: Operation not permitted
Oct 08 15:52:08 compute-0 irqbalance[843]: IRQ 37 affinity is now unmanaged
Oct 08 15:52:10 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 15:52:29 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 15:52:44 compute-0 sshd-session[1320]: Accepted publickey for zuul from 192.168.122.30 port 40164 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 15:52:44 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct 08 15:52:44 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 08 15:52:44 compute-0 systemd-logind[847]: New session 1 of user zuul.
Oct 08 15:52:44 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 08 15:52:44 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct 08 15:52:44 compute-0 systemd[1324]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 15:52:44 compute-0 systemd[1324]: Queued start job for default target Main User Target.
Oct 08 15:52:45 compute-0 systemd[1324]: Created slice User Application Slice.
Oct 08 15:52:44 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 15:52:45 compute-0 systemd[1324]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 15:52:45 compute-0 systemd[1324]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 15:52:45 compute-0 systemd[1324]: Reached target Paths.
Oct 08 15:52:45 compute-0 systemd[1324]: Reached target Timers.
Oct 08 15:52:45 compute-0 systemd[1324]: Starting D-Bus User Message Bus Socket...
Oct 08 15:52:45 compute-0 systemd[1324]: Starting Create User's Volatile Files and Directories...
Oct 08 15:52:45 compute-0 systemd[1324]: Listening on D-Bus User Message Bus Socket.
Oct 08 15:52:45 compute-0 systemd[1324]: Reached target Sockets.
Oct 08 15:52:45 compute-0 systemd[1324]: Finished Create User's Volatile Files and Directories.
Oct 08 15:52:45 compute-0 systemd[1324]: Reached target Basic System.
Oct 08 15:52:45 compute-0 systemd[1324]: Reached target Main User Target.
Oct 08 15:52:45 compute-0 systemd[1324]: Startup finished in 131ms.
Oct 08 15:52:45 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct 08 15:52:45 compute-0 systemd[1]: Started Session 1 of User zuul.
Oct 08 15:52:45 compute-0 sshd-session[1320]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 15:52:45 compute-0 sudo[1367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hctflaygexftyuzettuwbicdbolifvwd ; cat /proc/sys/kernel/random/boot_id'
Oct 08 15:52:45 compute-0 sudo[1367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:45 compute-0 sudo[1367]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:45 compute-0 sudo[1396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkpbtelcfrotzvmjfnojdwaggvukrqpi ; whoami'
Oct 08 15:52:45 compute-0 sudo[1396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:45 compute-0 sudo[1396]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:45 compute-0 sudo[1548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfkzmdzegavbucaausgbegtkbshrmesq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938764.93226-230-233213873551276/AnsiballZ_file.py'
Oct 08 15:52:45 compute-0 sudo[1548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:46 compute-0 python3.9[1550]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:52:46 compute-0 sudo[1548]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:46 compute-0 sshd-session[1340]: Connection closed by 192.168.122.30 port 40164
Oct 08 15:52:46 compute-0 sshd-session[1320]: pam_unix(sshd:session): session closed for user zuul
Oct 08 15:52:46 compute-0 systemd[1]: session-1.scope: Deactivated successfully.
Oct 08 15:52:46 compute-0 systemd-logind[847]: Session 1 logged out. Waiting for processes to exit.
Oct 08 15:52:46 compute-0 systemd-logind[847]: Removed session 1.
Oct 08 15:52:48 compute-0 irqbalance[843]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 08 15:52:48 compute-0 irqbalance[843]: IRQ 31 affinity is now unmanaged
Oct 08 15:52:51 compute-0 sshd-session[1575]: Accepted publickey for zuul from 192.168.122.30 port 40170 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 15:52:52 compute-0 systemd-logind[847]: New session 3 of user zuul.
Oct 08 15:52:52 compute-0 systemd[1]: Started Session 3 of User zuul.
Oct 08 15:52:52 compute-0 sshd-session[1575]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 15:52:53 compute-0 python3.9[1728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:52:54 compute-0 sudo[1882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixspgqbimaykdngjhofmppajephrrelf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938773.6564283-80-194129289803051/AnsiballZ_file.py'
Oct 08 15:52:54 compute-0 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:54 compute-0 python3.9[1884]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:52:54 compute-0 sudo[1882]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:55 compute-0 sudo[2034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwkenxtruiuhcqlbbpygclngxihnnvtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938774.5438783-80-124138254220479/AnsiballZ_file.py'
Oct 08 15:52:55 compute-0 sudo[2034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:55 compute-0 python3.9[2036]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:52:55 compute-0 sudo[2034]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:56 compute-0 sudo[2186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izbwrsfsytfgpymafuwrqylyqtdamito ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938775.356363-113-93519671464006/AnsiballZ_stat.py'
Oct 08 15:52:56 compute-0 sudo[2186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:56 compute-0 python3.9[2188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:52:56 compute-0 sudo[2186]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:56 compute-0 sudo[2309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxlsvlclswsgjntyqqmwqdbwkjgdibnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938775.356363-113-93519671464006/AnsiballZ_copy.py'
Oct 08 15:52:56 compute-0 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:57 compute-0 python3.9[2311]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938775.356363-113-93519671464006/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=99f314fca7c9140c1167d1fc69ec1378a6525e13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:52:57 compute-0 sudo[2309]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:57 compute-0 sudo[2461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuattvxjyijbqkcxqeurqfyiuobtzmcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938776.711904-113-263859983781811/AnsiballZ_stat.py'
Oct 08 15:52:57 compute-0 sudo[2461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:57 compute-0 python3.9[2463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:52:57 compute-0 sudo[2461]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:58 compute-0 sudo[2584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivxlovgoypwgkczlkzskeezfbcdatdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938776.711904-113-263859983781811/AnsiballZ_copy.py'
Oct 08 15:52:58 compute-0 sudo[2584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:58 compute-0 python3.9[2586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938776.711904-113-263859983781811/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ec5ddb71595e856d88c89ac721b041dda55a0f69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:52:58 compute-0 sudo[2584]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:58 compute-0 sudo[2736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rloelywsdynrxmfyjkjkdvcuponahumf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938778.0099688-113-146578447776016/AnsiballZ_stat.py'
Oct 08 15:52:58 compute-0 sudo[2736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:59 compute-0 python3.9[2738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:52:59 compute-0 sudo[2736]: pam_unix(sudo:session): session closed for user root
Oct 08 15:52:59 compute-0 sudo[2859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxcyrcnlovrrsrehbcycfjhalijuxfry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938778.0099688-113-146578447776016/AnsiballZ_copy.py'
Oct 08 15:52:59 compute-0 sudo[2859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:52:59 compute-0 python3.9[2861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938778.0099688-113-146578447776016/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7f4e463686c752ed37a4a46f126a1d41bcd5a322 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:52:59 compute-0 sudo[2859]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:00 compute-0 sudo[3011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfcwkidwymneahuizpwvdczelololrxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938779.433971-205-190976178480687/AnsiballZ_file.py'
Oct 08 15:53:00 compute-0 sudo[3011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:00 compute-0 python3.9[3013]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:00 compute-0 sudo[3011]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:00 compute-0 sudo[3163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skcitlbrjrfzraycfaqfhgvrxjdkeaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938780.1300902-205-234303466459558/AnsiballZ_file.py'
Oct 08 15:53:00 compute-0 sudo[3163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:01 compute-0 python3.9[3165]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:01 compute-0 sudo[3163]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:01 compute-0 sudo[3315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsepbakxtnlxksfwmhmlkutvfpwiatnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938780.892156-235-11321787917577/AnsiballZ_stat.py'
Oct 08 15:53:01 compute-0 sudo[3315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:01 compute-0 python3.9[3317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:01 compute-0 sudo[3315]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:02 compute-0 sudo[3438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnezhkalebihbbthpatmxjnesvpdlpba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938780.892156-235-11321787917577/AnsiballZ_copy.py'
Oct 08 15:53:02 compute-0 sudo[3438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:02 compute-0 python3.9[3440]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938780.892156-235-11321787917577/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=cc45a9940b997ed5e7b2c85bd6c7cfab93d597df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:02 compute-0 sudo[3438]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:03 compute-0 sudo[3590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcbiphbwzaxodonqhcgbizaouvwybbtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938782.2288954-235-158391416114210/AnsiballZ_stat.py'
Oct 08 15:53:03 compute-0 sudo[3590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:03 compute-0 python3.9[3592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:03 compute-0 sudo[3590]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:03 compute-0 sudo[3713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dswckpvgtvjddzounymonssfikckqvbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938782.2288954-235-158391416114210/AnsiballZ_copy.py'
Oct 08 15:53:03 compute-0 sudo[3713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:03 compute-0 python3.9[3715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938782.2288954-235-158391416114210/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b349a0f59de9ddc3ece61089d64b5f63e1113521 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:03 compute-0 sudo[3713]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:04 compute-0 sudo[3865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cujhiqibwdfvucvnojxdsgdmywmpfxnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938783.5507107-235-41850203133260/AnsiballZ_stat.py'
Oct 08 15:53:04 compute-0 sudo[3865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:04 compute-0 python3.9[3867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:04 compute-0 sudo[3865]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:04 compute-0 sudo[3988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlonxbuvrorgvxaatztddylrmrcysggs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938783.5507107-235-41850203133260/AnsiballZ_copy.py'
Oct 08 15:53:04 compute-0 sudo[3988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:05 compute-0 python3.9[3990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938783.5507107-235-41850203133260/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=bcbcb310c0c47ff49699d95ca0c67b9a2e2fceaf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:05 compute-0 sudo[3988]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:05 compute-0 sudo[4140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acuwikkqubtrhpydpydytiglewbrlmia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938784.82143-324-144451782757067/AnsiballZ_file.py'
Oct 08 15:53:05 compute-0 sudo[4140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:05 compute-0 python3.9[4142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:05 compute-0 sudo[4140]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:06 compute-0 sudo[4292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkglmzxcfdhncymikkmokwqgtrcbhczb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938785.4826453-324-56948242129574/AnsiballZ_file.py'
Oct 08 15:53:06 compute-0 sudo[4292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:06 compute-0 python3.9[4294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:06 compute-0 sudo[4292]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:07 compute-0 sudo[4444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oekmapkbapronjefezxlwiycbncftbuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938786.2038338-356-92650664583383/AnsiballZ_stat.py'
Oct 08 15:53:07 compute-0 sudo[4444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:07 compute-0 python3.9[4446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:07 compute-0 sudo[4444]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:07 compute-0 sudo[4567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiijccdenwuyvrptgdhspnbyigbwxbrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938786.2038338-356-92650664583383/AnsiballZ_copy.py'
Oct 08 15:53:07 compute-0 sudo[4567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:07 compute-0 python3.9[4569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938786.2038338-356-92650664583383/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ec1cc25f34bd48ac4ede5e444a595a207087021d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:07 compute-0 sudo[4567]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:08 compute-0 sudo[4719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsiaygqobthxgufmknwqmywmesjbmhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938787.4494414-356-51931965049194/AnsiballZ_stat.py'
Oct 08 15:53:08 compute-0 sudo[4719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:08 compute-0 python3.9[4721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:08 compute-0 sudo[4719]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:08 compute-0 sudo[4842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvmynnfoncyifxixudlmiogygxpukuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938787.4494414-356-51931965049194/AnsiballZ_copy.py'
Oct 08 15:53:08 compute-0 sudo[4842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:09 compute-0 python3.9[4844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938787.4494414-356-51931965049194/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6bade4c80fb2cf0d22fadf27f2aae1098ccc9ef8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:09 compute-0 sudo[4842]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:09 compute-0 sudo[4994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txuidxozwgwdiqhbzjsmpxaqlkzxcgqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938788.8090377-356-24634009678552/AnsiballZ_stat.py'
Oct 08 15:53:09 compute-0 sudo[4994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:09 compute-0 python3.9[4996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:09 compute-0 sudo[4994]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:10 compute-0 sudo[5117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzymjruyporupnyuczdasrbudsgxotbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938788.8090377-356-24634009678552/AnsiballZ_copy.py'
Oct 08 15:53:10 compute-0 sudo[5117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:10 compute-0 python3.9[5119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938788.8090377-356-24634009678552/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0b1f5fe68d2b1a07db0b41d960cce9a9b6e44f45 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:10 compute-0 sudo[5117]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:11 compute-0 sudo[5269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhxddcwllrwzgogztjjswloypptnwrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938790.222047-448-144149080140276/AnsiballZ_file.py'
Oct 08 15:53:11 compute-0 sudo[5269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:11 compute-0 python3.9[5271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:11 compute-0 sudo[5269]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:11 compute-0 sudo[5421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwcxonqaovygdtnzxyrfijhrtignsfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938790.825534-448-135583727676003/AnsiballZ_file.py'
Oct 08 15:53:11 compute-0 sudo[5421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:11 compute-0 python3.9[5423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:11 compute-0 sudo[5421]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:12 compute-0 sudo[5573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmmsbrupctpbugtqsenwhvzphprhvgev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938791.5012145-478-116940578084604/AnsiballZ_stat.py'
Oct 08 15:53:12 compute-0 sudo[5573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:12 compute-0 python3.9[5575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:12 compute-0 sudo[5573]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:13 compute-0 sudo[5696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pictpxddbodxehhrnynhsvnqrikyfwut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938791.5012145-478-116940578084604/AnsiballZ_copy.py'
Oct 08 15:53:13 compute-0 sudo[5696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:13 compute-0 python3.9[5698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938791.5012145-478-116940578084604/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6659919a356fa866f7f5c36c5d3de67ab94c1aa2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:13 compute-0 sudo[5696]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:13 compute-0 sudo[5848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvvjmeledbekrnmlpfpszfknevnffkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938792.886792-478-158775994818795/AnsiballZ_stat.py'
Oct 08 15:53:13 compute-0 sudo[5848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:13 compute-0 python3.9[5850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:13 compute-0 sudo[5848]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:14 compute-0 sudo[5971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhqyqxndxgprredwuxovewtiyutlsewo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938792.886792-478-158775994818795/AnsiballZ_copy.py'
Oct 08 15:53:14 compute-0 sudo[5971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:14 compute-0 python3.9[5973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938792.886792-478-158775994818795/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6bade4c80fb2cf0d22fadf27f2aae1098ccc9ef8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:14 compute-0 sudo[5971]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:15 compute-0 sudo[6123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwcttyfgekqtkvbqycwldarmbnwhnam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938794.1760092-478-133587942546381/AnsiballZ_stat.py'
Oct 08 15:53:15 compute-0 sudo[6123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:15 compute-0 python3.9[6125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:15 compute-0 sudo[6123]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:15 compute-0 sudo[6246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bebzynhcvhplyfddsevjhqshaldmuray ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938794.1760092-478-133587942546381/AnsiballZ_copy.py'
Oct 08 15:53:15 compute-0 sudo[6246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:15 compute-0 python3.9[6248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938794.1760092-478-133587942546381/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0ba6a9d6e77cd54d1afc156558330fb2f673aa7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:15 compute-0 sudo[6246]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:16 compute-0 sudo[6398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqoxinjllefjhsjxqspjngvikaxxkdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938796.145263-606-173541142489079/AnsiballZ_file.py'
Oct 08 15:53:16 compute-0 sudo[6398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:17 compute-0 python3.9[6400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:17 compute-0 sudo[6398]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:17 compute-0 sudo[6550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpsdzyuxzkodzzsyiehtjcwkjhxfwnst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938796.8462021-623-96504853306385/AnsiballZ_stat.py'
Oct 08 15:53:17 compute-0 sudo[6550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:17 compute-0 python3.9[6552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:17 compute-0 sudo[6550]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:18 compute-0 sudo[6673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crarlsqqttwwwngdwmtktyiapxebayto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938796.8462021-623-96504853306385/AnsiballZ_copy.py'
Oct 08 15:53:18 compute-0 sudo[6673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:18 compute-0 python3.9[6675]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938796.8462021-623-96504853306385/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:18 compute-0 sudo[6673]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:19 compute-0 sudo[6825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfvsiixfjfzbpuprjahxffmsmhcfbepx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938798.2583907-656-143220544348931/AnsiballZ_file.py'
Oct 08 15:53:19 compute-0 sudo[6825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:19 compute-0 python3.9[6827]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:19 compute-0 sudo[6825]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:19 compute-0 sudo[6977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhxnouazzhjximcskstuxlhijpwybypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938798.9436352-673-44078822526529/AnsiballZ_stat.py'
Oct 08 15:53:19 compute-0 sudo[6977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:19 compute-0 python3.9[6979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:20 compute-0 sudo[6977]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:20 compute-0 sudo[7100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kachnhpzjnwaqjtxiexxutbkinpqqaqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938798.9436352-673-44078822526529/AnsiballZ_copy.py'
Oct 08 15:53:20 compute-0 sudo[7100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:20 compute-0 python3.9[7102]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938798.9436352-673-44078822526529/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:20 compute-0 sudo[7100]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:21 compute-0 sudo[7252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvzzcowreziijzvwpxsdqjiykpdwebet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938800.3800344-707-89022268991522/AnsiballZ_file.py'
Oct 08 15:53:21 compute-0 sudo[7252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:21 compute-0 python3.9[7254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:21 compute-0 sudo[7252]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:22 compute-0 sudo[7404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaqlnhmtfuljldwxoiheeucfccsrbngp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938801.1236362-725-26747260263483/AnsiballZ_stat.py'
Oct 08 15:53:22 compute-0 sudo[7404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:22 compute-0 python3.9[7406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:22 compute-0 sudo[7404]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:22 compute-0 sudo[7527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqofovxqzbnebemdiklrsayjjnpnpeas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938801.1236362-725-26747260263483/AnsiballZ_copy.py'
Oct 08 15:53:22 compute-0 sudo[7527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:22 compute-0 python3.9[7529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938801.1236362-725-26747260263483/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:22 compute-0 sudo[7527]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:23 compute-0 sudo[7679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulofokolrpvbxhhjplplikupnllndnxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938802.5566626-759-168567432074246/AnsiballZ_file.py'
Oct 08 15:53:23 compute-0 sudo[7679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:23 compute-0 python3.9[7681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:23 compute-0 sudo[7679]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:24 compute-0 sudo[7831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epbkrwbjuoepkrblsbuavahlusnutlzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938803.2549963-775-42444731117888/AnsiballZ_stat.py'
Oct 08 15:53:24 compute-0 sudo[7831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:24 compute-0 python3.9[7833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:24 compute-0 sudo[7831]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:24 compute-0 sudo[7954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksyhspumgzmnweshdfpgrvrgiwfqcnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938803.2549963-775-42444731117888/AnsiballZ_copy.py'
Oct 08 15:53:24 compute-0 sudo[7954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:24 compute-0 python3.9[7956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938803.2549963-775-42444731117888/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:24 compute-0 sudo[7954]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:25 compute-0 sudo[8106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpknyowknijcambznncwmwydmovzszna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938804.6059728-808-192789451011786/AnsiballZ_file.py'
Oct 08 15:53:25 compute-0 sudo[8106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:25 compute-0 python3.9[8108]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:25 compute-0 sudo[8106]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:26 compute-0 sudo[8258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejypvtzyyscusqxyasfjtiuefvchzczr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938805.3063211-823-91358814066621/AnsiballZ_stat.py'
Oct 08 15:53:26 compute-0 sudo[8258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:26 compute-0 python3.9[8260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:26 compute-0 sudo[8258]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:26 compute-0 sudo[8381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idsnktxcmkfvauhdctfxkwcfrttyxald ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938805.3063211-823-91358814066621/AnsiballZ_copy.py'
Oct 08 15:53:26 compute-0 sudo[8381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:26 compute-0 python3.9[8383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938805.3063211-823-91358814066621/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:27 compute-0 sudo[8381]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:27 compute-0 sudo[8533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdhfqvrqsfdtlbsejfcjeianvybbklq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938806.6656516-856-219513407395578/AnsiballZ_file.py'
Oct 08 15:53:27 compute-0 sudo[8533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:27 compute-0 python3.9[8535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:27 compute-0 sudo[8533]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:28 compute-0 sudo[8685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdukfomejnpoqwjrpiwcqrypgzclxxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938807.4823084-876-9958204224094/AnsiballZ_stat.py'
Oct 08 15:53:28 compute-0 sudo[8685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:28 compute-0 python3.9[8687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:28 compute-0 sudo[8685]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:28 compute-0 sudo[8808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meiaojpheerlpjmnlzzyrrepmhiaqtsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938807.4823084-876-9958204224094/AnsiballZ_copy.py'
Oct 08 15:53:28 compute-0 sudo[8808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:29 compute-0 python3.9[8810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938807.4823084-876-9958204224094/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:29 compute-0 sudo[8808]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:29 compute-0 sudo[8960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nldqyhbdjumibkmqobobbqvcvujlkaaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938808.8977566-895-149970324909311/AnsiballZ_file.py'
Oct 08 15:53:29 compute-0 sudo[8960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:29 compute-0 python3.9[8962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:29 compute-0 sudo[8960]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:30 compute-0 sudo[9112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skiegqtgisboowynwujjgtafrsvfghav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938809.737311-903-53472964280875/AnsiballZ_stat.py'
Oct 08 15:53:30 compute-0 sudo[9112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:30 compute-0 python3.9[9114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:30 compute-0 sudo[9112]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:31 compute-0 sudo[9235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjxthmwxpwgrbfclajcootxvjlpwfidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938809.737311-903-53472964280875/AnsiballZ_copy.py'
Oct 08 15:53:31 compute-0 sudo[9235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:31 compute-0 python3.9[9237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938809.737311-903-53472964280875/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=5859f1930f9381e17115864c931890b588d29bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:31 compute-0 sudo[9235]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:31 compute-0 sshd-session[1578]: Connection closed by 192.168.122.30 port 40170
Oct 08 15:53:31 compute-0 sshd-session[1575]: pam_unix(sshd:session): session closed for user zuul
Oct 08 15:53:31 compute-0 systemd[1]: session-3.scope: Deactivated successfully.
Oct 08 15:53:31 compute-0 systemd[1]: session-3.scope: Consumed 31.448s CPU time.
Oct 08 15:53:31 compute-0 systemd-logind[847]: Session 3 logged out. Waiting for processes to exit.
Oct 08 15:53:31 compute-0 systemd-logind[847]: Removed session 3.
Oct 08 15:53:37 compute-0 sshd-session[9262]: Accepted publickey for zuul from 192.168.122.30 port 44034 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 15:53:37 compute-0 systemd-logind[847]: New session 4 of user zuul.
Oct 08 15:53:37 compute-0 systemd[1]: Started Session 4 of User zuul.
Oct 08 15:53:37 compute-0 sshd-session[9262]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 15:53:38 compute-0 python3.9[9415]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:53:39 compute-0 sudo[9569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xakyckfguokywxdrphkyrgizgvvtkoki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938818.6460903-48-227267827154462/AnsiballZ_file.py'
Oct 08 15:53:39 compute-0 sudo[9569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:39 compute-0 python3.9[9571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:39 compute-0 sudo[9569]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:40 compute-0 sudo[9721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giwsthrzdsfjzkladtdqqzfzaedthjyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938819.4563234-48-30613759470473/AnsiballZ_file.py'
Oct 08 15:53:40 compute-0 sudo[9721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:40 compute-0 python3.9[9723]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:53:40 compute-0 sudo[9721]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:41 compute-0 python3.9[9873]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:53:41 compute-0 sudo[10023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lecclwtvbqjfvccncsoojffpjkwdunpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938820.9261982-94-274418773982547/AnsiballZ_seboolean.py'
Oct 08 15:53:41 compute-0 sudo[10023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:42 compute-0 python3.9[10025]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 08 15:53:43 compute-0 sudo[10023]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:44 compute-0 sudo[10179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhrxrestjbcmofvnetqfbujixwrfpdyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938823.5672684-114-171019633079929/AnsiballZ_setup.py'
Oct 08 15:53:44 compute-0 dbus-broker-launch[839]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct 08 15:53:44 compute-0 sudo[10179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:44 compute-0 python3.9[10181]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 15:53:44 compute-0 sudo[10179]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:45 compute-0 sudo[10263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stuqxdbjejyxrdrdicohjxpastehzvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938823.5672684-114-171019633079929/AnsiballZ_dnf.py'
Oct 08 15:53:45 compute-0 sudo[10263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:45 compute-0 python3.9[10265]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 15:53:47 compute-0 sudo[10263]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:48 compute-0 sudo[10416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuzxwftfpjsxxcqgtahcdtjsolaazotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938827.1050975-138-97618459935636/AnsiballZ_systemd.py'
Oct 08 15:53:48 compute-0 sudo[10416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:48 compute-0 python3.9[10418]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:53:48 compute-0 sudo[10416]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:49 compute-0 sudo[10571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awjcymkhmdwqywjppzvetyiomnmkhghz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759938828.2196128-154-144962476589673/AnsiballZ_edpm_nftables_snippet.py'
Oct 08 15:53:49 compute-0 sudo[10571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:49 compute-0 python3[10573]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 08 15:53:49 compute-0 sudo[10571]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:49 compute-0 sudo[10723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmkjllzgxosohmpleltmepizkidpaqkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938829.1645381-172-242451196668671/AnsiballZ_file.py'
Oct 08 15:53:49 compute-0 sudo[10723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:50 compute-0 python3.9[10725]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:50 compute-0 sudo[10723]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:50 compute-0 sudo[10875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdhlnregxotslymyewicczughdgisist ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938829.86232-188-203857015161001/AnsiballZ_stat.py'
Oct 08 15:53:50 compute-0 sudo[10875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:51 compute-0 python3.9[10877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:51 compute-0 sudo[10875]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:51 compute-0 sudo[10953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrwggiwkhomoxbtelrjupqzohclwwohg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938829.86232-188-203857015161001/AnsiballZ_file.py'
Oct 08 15:53:51 compute-0 sudo[10953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:51 compute-0 python3.9[10955]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:51 compute-0 sudo[10953]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:52 compute-0 sudo[11105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxxufgswsgrmknqpmkrgtflnqogupycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938831.1816773-212-153787684757529/AnsiballZ_stat.py'
Oct 08 15:53:52 compute-0 sudo[11105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:52 compute-0 python3.9[11107]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:52 compute-0 sudo[11105]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:52 compute-0 sudo[11183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-georpkwuxorsjzmgdusaxjtrvchyigry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938831.1816773-212-153787684757529/AnsiballZ_file.py'
Oct 08 15:53:52 compute-0 sudo[11183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:52 compute-0 python3.9[11185]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.lxh9x2vf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:52 compute-0 sudo[11183]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:53 compute-0 sudo[11335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozvlfuvvqwrmsjrdkuhxczvquzuuexoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938832.4611273-236-276204390262189/AnsiballZ_stat.py'
Oct 08 15:53:53 compute-0 sudo[11335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:53 compute-0 python3.9[11337]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:53 compute-0 sudo[11335]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:53 compute-0 sudo[11413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rajyvevmlaacvphldbwmxvacxcanpziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938832.4611273-236-276204390262189/AnsiballZ_file.py'
Oct 08 15:53:53 compute-0 sudo[11413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:53 compute-0 python3.9[11415]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:54 compute-0 sudo[11413]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:54 compute-0 sudo[11565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbmhrforxrgusgfdpdnnahqenznqbfox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938833.6825008-262-112074050773079/AnsiballZ_command.py'
Oct 08 15:53:54 compute-0 sudo[11565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:54 compute-0 python3.9[11567]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:53:54 compute-0 sudo[11565]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:55 compute-0 sudo[11718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqgxubzaxyanqyhkkzhfvinfwopjsbi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759938834.4910328-278-189788843994005/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 15:53:55 compute-0 sudo[11718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:55 compute-0 python3[11720]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 15:53:55 compute-0 sudo[11718]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:56 compute-0 sudo[11870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozymkypbrnfwxcresfumcuyvxytazgxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938835.3593557-294-279918334578553/AnsiballZ_stat.py'
Oct 08 15:53:56 compute-0 sudo[11870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:56 compute-0 python3.9[11872]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:56 compute-0 sudo[11870]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:56 compute-0 sudo[11995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imgqqsofkfctkfchjwjemamrciufxvin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938835.3593557-294-279918334578553/AnsiballZ_copy.py'
Oct 08 15:53:56 compute-0 sudo[11995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:57 compute-0 python3.9[11997]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938835.3593557-294-279918334578553/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:57 compute-0 sudo[11995]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:57 compute-0 sudo[12147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msfboxmtoezbcfeahtlenrskzznsbnmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938836.7300913-324-58214227404173/AnsiballZ_stat.py'
Oct 08 15:53:57 compute-0 sudo[12147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:57 compute-0 python3.9[12149]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:57 compute-0 sudo[12147]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:58 compute-0 sudo[12272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkwmgqffvewuziibhzoduymydfhodisx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938836.7300913-324-58214227404173/AnsiballZ_copy.py'
Oct 08 15:53:58 compute-0 sudo[12272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:58 compute-0 python3.9[12274]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938836.7300913-324-58214227404173/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:58 compute-0 sudo[12272]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:58 compute-0 sudo[12424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rusqajwawhpuayyvpfonfuwoifdwmeei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938838.095209-354-280128412682145/AnsiballZ_stat.py'
Oct 08 15:53:58 compute-0 sudo[12424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:59 compute-0 python3.9[12426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:53:59 compute-0 sudo[12424]: pam_unix(sudo:session): session closed for user root
Oct 08 15:53:59 compute-0 sudo[12549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nllfrqdvhliurrpverawprvnmqdjgvaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938838.095209-354-280128412682145/AnsiballZ_copy.py'
Oct 08 15:53:59 compute-0 sudo[12549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:53:59 compute-0 python3.9[12551]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938838.095209-354-280128412682145/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:53:59 compute-0 sudo[12549]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:00 compute-0 sudo[12701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzdvptwqvippmycpwitrxlxqxzyxmztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938839.3791344-384-153100490563234/AnsiballZ_stat.py'
Oct 08 15:54:00 compute-0 sudo[12701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:00 compute-0 python3.9[12703]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:00 compute-0 sudo[12701]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:00 compute-0 sudo[12826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxjsqwsyaxtmavpsekwevnjiahwgdqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938839.3791344-384-153100490563234/AnsiballZ_copy.py'
Oct 08 15:54:00 compute-0 sudo[12826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:01 compute-0 python3.9[12828]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938839.3791344-384-153100490563234/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:01 compute-0 sudo[12826]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:01 compute-0 sudo[12978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fitjygturbnuvdavuccyihreungbpimc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938840.6743429-414-214212005228037/AnsiballZ_stat.py'
Oct 08 15:54:01 compute-0 sudo[12978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:01 compute-0 python3.9[12980]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:01 compute-0 sudo[12978]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:02 compute-0 sudo[13103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amqijdxdusscmhuvbmffxqhcdxshsorb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938840.6743429-414-214212005228037/AnsiballZ_copy.py'
Oct 08 15:54:02 compute-0 sudo[13103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:02 compute-0 python3.9[13105]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759938840.6743429-414-214212005228037/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:02 compute-0 sudo[13103]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:03 compute-0 sudo[13255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbmzvmiglisormxdkirgsjdqhucyxjyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938842.1880453-444-72050691962661/AnsiballZ_file.py'
Oct 08 15:54:03 compute-0 sudo[13255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:03 compute-0 python3.9[13257]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:03 compute-0 sudo[13255]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:03 compute-0 sudo[13407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbzwruakvdhqbgtqjqiriehpwhphwgli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938842.8822234-460-281389266907299/AnsiballZ_command.py'
Oct 08 15:54:03 compute-0 sudo[13407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:03 compute-0 python3.9[13409]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:04 compute-0 sudo[13407]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:04 compute-0 sudo[13562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvqwczbazefcprbgrdaeejpkynupkria ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938843.618583-476-170293973080652/AnsiballZ_blockinfile.py'
Oct 08 15:54:04 compute-0 sudo[13562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:04 compute-0 python3.9[13564]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:04 compute-0 sudo[13562]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:05 compute-0 sudo[13714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhrsegvqpwejmkbkyycdkgeqhtikksck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938844.5375278-494-153264672976387/AnsiballZ_command.py'
Oct 08 15:54:05 compute-0 sudo[13714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:05 compute-0 python3.9[13716]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:05 compute-0 sudo[13714]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:06 compute-0 sudo[13867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpofrcjdgdbzvousynionguddwblrmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938845.2853255-510-255397767192337/AnsiballZ_stat.py'
Oct 08 15:54:06 compute-0 sudo[13867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:06 compute-0 python3.9[13869]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:54:06 compute-0 sudo[13867]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:06 compute-0 sudo[14021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjkoekruyzmpksuhvatsyxxbyvttxqsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938846.042283-526-24702282251770/AnsiballZ_command.py'
Oct 08 15:54:06 compute-0 sudo[14021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:07 compute-0 python3.9[14023]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:07 compute-0 sudo[14021]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:07 compute-0 sudo[14176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mffdvhqgslzghdpjcdhqplvuisfphxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938846.6976933-542-279162921241122/AnsiballZ_file.py'
Oct 08 15:54:07 compute-0 sudo[14176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:07 compute-0 python3.9[14178]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:07 compute-0 sudo[14176]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:08 compute-0 python3.9[14328]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:54:09 compute-0 sudo[14479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrolkpdujwfwfleknykwhoeqwdaggfko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938849.0372386-622-196765545757674/AnsiballZ_command.py'
Oct 08 15:54:09 compute-0 sudo[14479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:10 compute-0 python3.9[14481]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:10 compute-0 ovs-vsctl[14482]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 08 15:54:10 compute-0 sudo[14479]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:10 compute-0 sudo[14632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtpykmjwtneirgieinqioacwumkhmcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938849.8252459-640-196465500582485/AnsiballZ_command.py'
Oct 08 15:54:10 compute-0 sudo[14632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:10 compute-0 python3.9[14634]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:10 compute-0 sudo[14632]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:11 compute-0 sudo[14787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryoqwtgzlzrbsbsdgcpuakjfqzpdpjyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938850.5289152-656-132602802611627/AnsiballZ_command.py'
Oct 08 15:54:11 compute-0 sudo[14787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:11 compute-0 python3.9[14789]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:11 compute-0 ovs-vsctl[14790]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 08 15:54:11 compute-0 sudo[14787]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:12 compute-0 python3.9[14940]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:54:12 compute-0 sudo[15092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyizouoptpphhedlcvbtahmvjiogzprq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938851.8987577-690-228513906002722/AnsiballZ_file.py'
Oct 08 15:54:12 compute-0 sudo[15092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:12 compute-0 python3.9[15094]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:12 compute-0 sudo[15092]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:13 compute-0 sudo[15244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvlzvxysnjfierrtycapuyxlbtdfkkuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938852.6044607-706-106128233243184/AnsiballZ_stat.py'
Oct 08 15:54:13 compute-0 sudo[15244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:13 compute-0 python3.9[15246]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:13 compute-0 sudo[15244]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:13 compute-0 sudo[15322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnmxrfofckekgbglnwzjcnuucsngclfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938852.6044607-706-106128233243184/AnsiballZ_file.py'
Oct 08 15:54:13 compute-0 sudo[15322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:14 compute-0 python3.9[15324]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:14 compute-0 sudo[15322]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:14 compute-0 chronyd[853]: Selected source 162.159.200.1 (pool.ntp.org)
Oct 08 15:54:14 compute-0 sudo[15474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnmefxjmnqnshgmletwdytgiezkdpnvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938853.8024156-706-261021101148708/AnsiballZ_stat.py'
Oct 08 15:54:14 compute-0 sudo[15474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:14 compute-0 python3.9[15476]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:14 compute-0 sudo[15474]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:15 compute-0 sudo[15552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zukolfszlmltxxwuhhighjdlnxdcoqxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938853.8024156-706-261021101148708/AnsiballZ_file.py'
Oct 08 15:54:15 compute-0 sudo[15552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:15 compute-0 python3.9[15554]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:15 compute-0 sudo[15552]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:15 compute-0 sudo[15704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayxvjmtzofdfjxuapsxyznmsbkazkqrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938855.113509-752-218541360552104/AnsiballZ_file.py'
Oct 08 15:54:15 compute-0 sudo[15704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:16 compute-0 python3.9[15706]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:16 compute-0 sudo[15704]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:16 compute-0 sudo[15856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjndlicrucqniayyxkznpjeucgclfsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938855.8342533-768-6323800776381/AnsiballZ_stat.py'
Oct 08 15:54:16 compute-0 sudo[15856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:16 compute-0 python3.9[15858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:16 compute-0 sudo[15856]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:16 compute-0 sudo[15934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inkckdyaxgdylwmoqynmzmimjqgfvntq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938855.8342533-768-6323800776381/AnsiballZ_file.py'
Oct 08 15:54:16 compute-0 sudo[15934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:17 compute-0 python3.9[15936]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:17 compute-0 sudo[15934]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:17 compute-0 sudo[16086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhizhqqwonnqeqzaattessjuwjtpsvrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938857.0915866-792-115991440784319/AnsiballZ_stat.py'
Oct 08 15:54:17 compute-0 sudo[16086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:17 compute-0 python3.9[16088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:17 compute-0 sudo[16086]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:18 compute-0 sudo[16164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sysiipmsghjqsnviyujsbgbokyvtupxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938857.0915866-792-115991440784319/AnsiballZ_file.py'
Oct 08 15:54:18 compute-0 sudo[16164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:18 compute-0 python3.9[16166]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:18 compute-0 sudo[16164]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:18 compute-0 sudo[16316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttjannbwurlwwifqtfyozsavpajkpvmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938858.4512799-816-117411398073387/AnsiballZ_systemd.py'
Oct 08 15:54:18 compute-0 sudo[16316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:19 compute-0 python3.9[16318]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:54:19 compute-0 systemd[1]: Reloading.
Oct 08 15:54:19 compute-0 systemd-sysv-generator[16346]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:54:19 compute-0 systemd-rc-local-generator[16342]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:54:19 compute-0 sudo[16316]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:19 compute-0 sudo[16506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysobqsgxtgagybzktiohfstbzsyanlil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938859.5809293-832-198666814068651/AnsiballZ_stat.py'
Oct 08 15:54:19 compute-0 sudo[16506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:20 compute-0 python3.9[16508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:20 compute-0 sudo[16506]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:20 compute-0 sudo[16584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghoalewlhkkczneaegympnprfuzdpssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938859.5809293-832-198666814068651/AnsiballZ_file.py'
Oct 08 15:54:20 compute-0 sudo[16584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:20 compute-0 python3.9[16586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:20 compute-0 sudo[16584]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:20 compute-0 sudo[16736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyntudoirhzghdgfthdsmgyjiykqhnni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938860.685768-856-112408596844922/AnsiballZ_stat.py'
Oct 08 15:54:21 compute-0 sudo[16736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:21 compute-0 python3.9[16738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:21 compute-0 sudo[16736]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:21 compute-0 sudo[16814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshplygnwiccpkwhikgwuxmamxhsrqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938860.685768-856-112408596844922/AnsiballZ_file.py'
Oct 08 15:54:21 compute-0 sudo[16814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:21 compute-0 python3.9[16816]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:21 compute-0 sudo[16814]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:22 compute-0 sudo[16966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwymrdhcasuvcpbiaznglwxmgisloeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938861.8523679-880-251071380520732/AnsiballZ_systemd.py'
Oct 08 15:54:22 compute-0 sudo[16966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:22 compute-0 python3.9[16968]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:54:22 compute-0 systemd[1]: Reloading.
Oct 08 15:54:22 compute-0 systemd-sysv-generator[16999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:54:22 compute-0 systemd-rc-local-generator[16995]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:54:22 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 15:54:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 15:54:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 15:54:22 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 15:54:22 compute-0 sudo[16966]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:23 compute-0 sudo[17158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwyjvympncbyyjwflrsvuwjctclpcedg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938863.281623-900-68738027017089/AnsiballZ_file.py'
Oct 08 15:54:23 compute-0 sudo[17158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:24 compute-0 python3.9[17160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:24 compute-0 sudo[17158]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:24 compute-0 sudo[17310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhwqvamykctgfmwctbfpimtzpdtrksff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938864.2279158-916-45779265689786/AnsiballZ_stat.py'
Oct 08 15:54:24 compute-0 sudo[17310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:24 compute-0 python3.9[17312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:24 compute-0 sudo[17310]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:25 compute-0 sudo[17433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwfasolorritdiqlmwklcsuqzatcaugd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938864.2279158-916-45779265689786/AnsiballZ_copy.py'
Oct 08 15:54:25 compute-0 sudo[17433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:25 compute-0 python3.9[17435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938864.2279158-916-45779265689786/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:25 compute-0 sudo[17433]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:26 compute-0 sudo[17585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecgezkfouzxwmqweugtzpmtpkjujkmqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938865.7390766-950-241592418393995/AnsiballZ_file.py'
Oct 08 15:54:26 compute-0 sudo[17585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:26 compute-0 python3.9[17587]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:26 compute-0 sudo[17585]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:26 compute-0 sudo[17737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsvcadksvitzzbkdxhovsizdgvilbwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938866.4962387-966-140615844965422/AnsiballZ_stat.py'
Oct 08 15:54:26 compute-0 sudo[17737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:27 compute-0 python3.9[17739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:27 compute-0 sudo[17737]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:27 compute-0 sudo[17860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbwhyxcbtptxyhzfmxugqubqqoxvfse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938866.4962387-966-140615844965422/AnsiballZ_copy.py'
Oct 08 15:54:27 compute-0 sudo[17860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:27 compute-0 python3.9[17862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759938866.4962387-966-140615844965422/.source.json _original_basename=.qycq6az4 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:27 compute-0 sudo[17860]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:28 compute-0 sudo[18012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgmpasvokxwbjyecahtofclauwuruie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938867.8219225-996-201647222488009/AnsiballZ_file.py'
Oct 08 15:54:28 compute-0 sudo[18012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:28 compute-0 python3.9[18014]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:28 compute-0 sudo[18012]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:28 compute-0 sudo[18164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxivrnqswbkjvoshwzthninukdfclavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938868.6668289-1012-68090407178410/AnsiballZ_stat.py'
Oct 08 15:54:28 compute-0 sudo[18164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:29 compute-0 sudo[18164]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:29 compute-0 sudo[18287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyhvsvfemgjrapjvfjiirbanpitvgqzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938868.6668289-1012-68090407178410/AnsiballZ_copy.py'
Oct 08 15:54:29 compute-0 sudo[18287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:29 compute-0 sudo[18287]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:30 compute-0 sudo[18439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fftitlesokitamglvpfxcrpfpedjoyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938870.154566-1046-197006941890698/AnsiballZ_container_config_data.py'
Oct 08 15:54:30 compute-0 sudo[18439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:30 compute-0 python3.9[18441]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 08 15:54:30 compute-0 sudo[18439]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:31 compute-0 sudo[18591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnmhmzepuepkmyihiuqduejkxzlxxsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938871.0885644-1064-258322033987011/AnsiballZ_container_config_hash.py'
Oct 08 15:54:31 compute-0 sudo[18591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:31 compute-0 python3.9[18593]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 15:54:31 compute-0 sudo[18591]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:32 compute-0 sudo[18743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlxzrnmnxsireuwfputkdsqgfywusstz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938872.0057986-1082-209987476731493/AnsiballZ_podman_container_info.py'
Oct 08 15:54:32 compute-0 sudo[18743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:32 compute-0 python3.9[18745]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 15:54:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2046005139-merged.mount: Deactivated successfully.
Oct 08 15:54:32 compute-0 kernel: evm: overlay not supported
Oct 08 15:54:32 compute-0 podman[18746]: 2025-10-08 15:54:32.810066169 +0000 UTC m=+0.109112260 system refresh
Oct 08 15:54:32 compute-0 sudo[18743]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 15:54:34 compute-0 sudo[18910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vosuvndxipuincudcduzgvocxnovjyao ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759938873.6344883-1108-74031462687740/AnsiballZ_edpm_container_manage.py'
Oct 08 15:54:34 compute-0 sudo[18910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:34 compute-0 python3[18912]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 15:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 15:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 15:54:34 compute-0 podman[18951]: 2025-10-08 15:54:34.672284555 +0000 UTC m=+0.080089562 container create de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 08 15:54:34 compute-0 podman[18951]: 2025-10-08 15:54:34.621193431 +0000 UTC m=+0.028998478 image pull 46b8ffb2caad3de26f2d6247cc53bee77c3d2c924a3f0a044d9bfeaedcc1285f 38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Oct 08 15:54:34 compute-0 python3[18912]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z 38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest
Oct 08 15:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 08 15:54:34 compute-0 sudo[18910]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:35 compute-0 sudo[19139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhemaqtpslyhojoltptmagykqjkrlds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938874.9691746-1124-8796303106793/AnsiballZ_stat.py'
Oct 08 15:54:35 compute-0 sudo[19139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:35 compute-0 python3.9[19141]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:54:35 compute-0 sudo[19139]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:36 compute-0 sudo[19293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezxeqmrwicenpzzykjxcrmwxtfdtxqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938875.7854168-1142-247694421987232/AnsiballZ_file.py'
Oct 08 15:54:36 compute-0 sudo[19293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:36 compute-0 python3.9[19295]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:36 compute-0 sudo[19293]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:36 compute-0 sudo[19369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mksuagiostuquvltpkjjgvmppmndvlmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938875.7854168-1142-247694421987232/AnsiballZ_stat.py'
Oct 08 15:54:36 compute-0 sudo[19369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:36 compute-0 python3.9[19371]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:54:36 compute-0 sudo[19369]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:37 compute-0 sudo[19520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-synermiaceudhfifrqshqvfmbupyqvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938876.8718867-1142-131703463081863/AnsiballZ_copy.py'
Oct 08 15:54:37 compute-0 sudo[19520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:37 compute-0 python3.9[19522]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759938876.8718867-1142-131703463081863/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:54:37 compute-0 sudo[19520]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:37 compute-0 sudo[19596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpoedncwalomisjknxblpgvvfrlfyfjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938876.8718867-1142-131703463081863/AnsiballZ_systemd.py'
Oct 08 15:54:37 compute-0 sudo[19596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:38 compute-0 python3.9[19598]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 15:54:38 compute-0 systemd[1]: Reloading.
Oct 08 15:54:38 compute-0 systemd-rc-local-generator[19625]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:54:38 compute-0 systemd-sysv-generator[19629]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:54:38 compute-0 sudo[19596]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:38 compute-0 sudo[19708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bclngypverhxnlqtdmigvcaoxlmbcelm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938876.8718867-1142-131703463081863/AnsiballZ_systemd.py'
Oct 08 15:54:38 compute-0 sudo[19708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:38 compute-0 python3.9[19710]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:54:40 compute-0 systemd[1]: Reloading.
Oct 08 15:54:40 compute-0 systemd-sysv-generator[19744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:54:40 compute-0 systemd-rc-local-generator[19740]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:54:40 compute-0 systemd[1]: Starting ovn_controller container...
Oct 08 15:54:40 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 08 15:54:40 compute-0 systemd[1]: Started libcrun container.
Oct 08 15:54:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3bcab3f87507be01dbcf818c3c74e7e0035e925959d9198172c7fa1bca946b4/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 08 15:54:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4.
Oct 08 15:54:40 compute-0 podman[19752]: 2025-10-08 15:54:40.450141205 +0000 UTC m=+0.175330191 container init de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 15:54:40 compute-0 podman[19752]: 2025-10-08 15:54:40.482161089 +0000 UTC m=+0.207350035 container start de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 15:54:40 compute-0 edpm-start-podman-container[19752]: ovn_controller
Oct 08 15:54:40 compute-0 ovn_controller[19768]: + sudo -E kolla_set_configs
Oct 08 15:54:40 compute-0 edpm-start-podman-container[19751]: Creating additional drop-in dependency for "ovn_controller" (de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4)
Oct 08 15:54:40 compute-0 systemd[1]: Reloading.
Oct 08 15:54:40 compute-0 podman[19773]: 2025-10-08 15:54:40.619889933 +0000 UTC m=+0.114421533 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 08 15:54:40 compute-0 systemd-sysv-generator[19848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:54:40 compute-0 systemd-rc-local-generator[19844]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:54:40 compute-0 systemd[1]: de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4-4ae0d81cd22985b6.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 15:54:40 compute-0 systemd[1]: de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4-4ae0d81cd22985b6.service: Failed with result 'exit-code'.
Oct 08 15:54:40 compute-0 systemd[1]: Started ovn_controller container.
Oct 08 15:54:40 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 08 15:54:40 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 08 15:54:40 compute-0 sudo[19708]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:40 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 08 15:54:40 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 08 15:54:40 compute-0 systemd[19852]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 08 15:54:41 compute-0 systemd[19852]: Queued start job for default target Main User Target.
Oct 08 15:54:41 compute-0 systemd[19852]: Created slice User Application Slice.
Oct 08 15:54:41 compute-0 systemd[19852]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 08 15:54:41 compute-0 systemd[19852]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 15:54:41 compute-0 systemd[19852]: Reached target Paths.
Oct 08 15:54:41 compute-0 systemd[19852]: Reached target Timers.
Oct 08 15:54:41 compute-0 systemd[19852]: Starting D-Bus User Message Bus Socket...
Oct 08 15:54:41 compute-0 systemd[19852]: Starting Create User's Volatile Files and Directories...
Oct 08 15:54:41 compute-0 systemd[19852]: Listening on D-Bus User Message Bus Socket.
Oct 08 15:54:41 compute-0 systemd[19852]: Reached target Sockets.
Oct 08 15:54:41 compute-0 systemd[19852]: Finished Create User's Volatile Files and Directories.
Oct 08 15:54:41 compute-0 systemd[19852]: Reached target Basic System.
Oct 08 15:54:41 compute-0 systemd[19852]: Reached target Main User Target.
Oct 08 15:54:41 compute-0 systemd[19852]: Startup finished in 143ms.
Oct 08 15:54:41 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 08 15:54:41 compute-0 systemd[1]: Started Session c1 of User root.
Oct 08 15:54:41 compute-0 ovn_controller[19768]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 15:54:41 compute-0 ovn_controller[19768]: INFO:__main__:Validating config file
Oct 08 15:54:41 compute-0 ovn_controller[19768]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 15:54:41 compute-0 ovn_controller[19768]: INFO:__main__:Writing out command to execute
Oct 08 15:54:41 compute-0 ovn_controller[19768]: ++ cat /run_command
Oct 08 15:54:41 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + ARGS=
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + sudo kolla_copy_cacerts
Oct 08 15:54:41 compute-0 systemd[1]: Started Session c2 of User root.
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + [[ ! -n '' ]]
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + . kolla_extend_start
Oct 08 15:54:41 compute-0 ovn_controller[19768]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + umask 0022
Oct 08 15:54:41 compute-0 ovn_controller[19768]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 08 15:54:41 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00003|main|INFO|OVN internal version is : [24.09.4-20.37.0-77.8]
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00005|stream_ssl|ERR|ssl:ovsdbserver-sb.openstack.svc:6642: connect: Address family not supported by protocol
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00006|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connection attempt failed (Address family not supported by protocol)
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00008|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00009|ovn_util|INFO|statctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00011|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00013|ovn_util|INFO|pinctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00014|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00015|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Oct 08 15:54:41 compute-0 ovn_controller[19768]: 2025-10-08T15:54:41Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Oct 08 15:54:41 compute-0 NetworkManager[1034]: <info>  [1759938881.4078] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 08 15:54:41 compute-0 NetworkManager[1034]: <info>  [1759938881.4088] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 08 15:54:41 compute-0 NetworkManager[1034]: <info>  [1759938881.4109] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 08 15:54:41 compute-0 NetworkManager[1034]: <info>  [1759938881.4115] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Oct 08 15:54:41 compute-0 NetworkManager[1034]: <info>  [1759938881.4121] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 08 15:54:41 compute-0 kernel: br-int: entered promiscuous mode
Oct 08 15:54:41 compute-0 sudo[20029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzgegtdtwllbpjzemugxzktceylkdob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938881.0731778-1198-236473093508151/AnsiballZ_command.py'
Oct 08 15:54:41 compute-0 sudo[20029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:41 compute-0 systemd-udevd[20031]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:54:41 compute-0 python3.9[20032]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:41 compute-0 ovs-vsctl[20035]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 08 15:54:41 compute-0 sudo[20029]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:42 compute-0 sudo[20185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzgeuqpafyxfypoaxwfuncsjulhbmxdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938881.8454945-1214-147703027859641/AnsiballZ_command.py'
Oct 08 15:54:42 compute-0 sudo[20185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00001|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00001|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 15:54:42 compute-0 python3.9[20187]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:42 compute-0 ovs-vsctl[20190]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00017|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00018|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00019|ovn_util|INFO|features: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00021|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00022|features|INFO|OVS Feature: ct_flush, state: supported
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00023|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00024|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00025|main|INFO|OVS feature set changed, force recompute.
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00026|ovn_util|INFO|ofctrl: connecting to switch: "unix:/var/run/openvswitch/br-int.mgmt"
Oct 08 15:54:42 compute-0 sudo[20185]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00027|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00028|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00029|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00030|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00031|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00032|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00033|features|INFO|OVS Feature: meter_support, state: supported
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00034|features|INFO|OVS Feature: group_support, state: supported
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00035|main|INFO|OVS feature set changed, force recompute.
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00036|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 08 15:54:42 compute-0 ovn_controller[19768]: 2025-10-08T15:54:42Z|00037|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 08 15:54:42 compute-0 NetworkManager[1034]: <info>  [1759938882.5082] manager: (ovn-71d4f9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 08 15:54:42 compute-0 NetworkManager[1034]: <info>  [1759938882.5090] manager: (ovn-21ff71-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/18)
Oct 08 15:54:42 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Oct 08 15:54:42 compute-0 systemd-udevd[20034]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 15:54:42 compute-0 NetworkManager[1034]: <info>  [1759938882.5249] device (genev_sys_6081): carrier: link connected
Oct 08 15:54:42 compute-0 NetworkManager[1034]: <info>  [1759938882.5254] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/19)
Oct 08 15:54:43 compute-0 sudo[20345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfojeahgaegpexonfjqzqknsrerwtjio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938883.0137575-1242-102957706616037/AnsiballZ_command.py'
Oct 08 15:54:43 compute-0 sudo[20345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:43 compute-0 python3.9[20347]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:54:43 compute-0 ovs-vsctl[20348]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 08 15:54:43 compute-0 sudo[20345]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:44 compute-0 sshd-session[9265]: Connection closed by 192.168.122.30 port 44034
Oct 08 15:54:44 compute-0 sshd-session[9262]: pam_unix(sshd:session): session closed for user zuul
Oct 08 15:54:44 compute-0 systemd[1]: session-4.scope: Deactivated successfully.
Oct 08 15:54:44 compute-0 systemd[1]: session-4.scope: Consumed 49.466s CPU time.
Oct 08 15:54:44 compute-0 systemd-logind[847]: Session 4 logged out. Waiting for processes to exit.
Oct 08 15:54:44 compute-0 systemd-logind[847]: Removed session 4.
Oct 08 15:54:49 compute-0 sshd-session[20373]: Accepted publickey for zuul from 192.168.122.30 port 56142 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 15:54:49 compute-0 systemd-logind[847]: New session 6 of user zuul.
Oct 08 15:54:49 compute-0 systemd[1]: Started Session 6 of User zuul.
Oct 08 15:54:49 compute-0 sshd-session[20373]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 15:54:51 compute-0 python3.9[20526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:54:51 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 08 15:54:51 compute-0 systemd[19852]: Activating special unit Exit the Session...
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped target Main User Target.
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped target Basic System.
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped target Paths.
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped target Sockets.
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped target Timers.
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 15:54:51 compute-0 systemd[19852]: Closed D-Bus User Message Bus Socket.
Oct 08 15:54:51 compute-0 systemd[19852]: Stopped Create User's Volatile Files and Directories.
Oct 08 15:54:51 compute-0 systemd[19852]: Removed slice User Application Slice.
Oct 08 15:54:51 compute-0 systemd[19852]: Reached target Shutdown.
Oct 08 15:54:51 compute-0 systemd[19852]: Finished Exit the Session.
Oct 08 15:54:51 compute-0 systemd[19852]: Reached target Exit the Session.
Oct 08 15:54:51 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 08 15:54:51 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 08 15:54:51 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 08 15:54:51 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 08 15:54:51 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 08 15:54:51 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 08 15:54:51 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 08 15:54:51 compute-0 systemd[1324]: Starting Mark boot as successful...
Oct 08 15:54:51 compute-0 systemd[1324]: Finished Mark boot as successful.
Oct 08 15:54:51 compute-0 sudo[20682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyajspbciiwtyouwctgbuxgatczxipru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938891.517528-48-84655784380569/AnsiballZ_file.py'
Oct 08 15:54:51 compute-0 sudo[20682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:52 compute-0 python3.9[20684]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:52 compute-0 sudo[20682]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:52 compute-0 sudo[20834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atuvxmbaqghwamielumqxboylfoskmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938892.3781893-48-266685248510595/AnsiballZ_file.py'
Oct 08 15:54:52 compute-0 sudo[20834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:53 compute-0 python3.9[20836]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:53 compute-0 sudo[20834]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:53 compute-0 sudo[20986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jajguuuobjsawcgycphiivegvnczumsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938893.235469-48-254472485208711/AnsiballZ_file.py'
Oct 08 15:54:53 compute-0 sudo[20986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:53 compute-0 python3.9[20988]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:53 compute-0 sudo[20986]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:54 compute-0 sudo[21138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zquxblhhxctgepapgdafvhoyudatdouj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938893.9217744-48-47096740399793/AnsiballZ_file.py'
Oct 08 15:54:54 compute-0 sudo[21138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:54 compute-0 python3.9[21140]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:54 compute-0 sudo[21138]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:54 compute-0 sudo[21290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzzxtvsjptlvdhyqysqlelguzksafqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938894.5697865-48-124446993554406/AnsiballZ_file.py'
Oct 08 15:54:54 compute-0 sudo[21290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:55 compute-0 python3.9[21292]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:55 compute-0 sudo[21290]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:55 compute-0 python3.9[21442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:54:56 compute-0 sudo[21592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isumwabwxomopbtpvvefrsuylvutbclx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938896.0888968-136-142429848847490/AnsiballZ_seboolean.py'
Oct 08 15:54:56 compute-0 sudo[21592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:54:56 compute-0 python3.9[21594]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 08 15:54:57 compute-0 sudo[21592]: pam_unix(sudo:session): session closed for user root
Oct 08 15:54:58 compute-0 python3.9[21744]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:54:59 compute-0 python3.9[21865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938897.6895723-152-264624409760876/.source follow=False _original_basename=haproxy.j2 checksum=3cf3adbe081d4d08ce0277c175b9a8fcf39160e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:54:59 compute-0 python3.9[22015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:00 compute-0 python3.9[22136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938899.3020961-182-227498891382101/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:01 compute-0 sudo[22287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqeeyeankjqvuglkdtyjyfvbtersppfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938900.6887414-216-154434706739044/AnsiballZ_setup.py'
Oct 08 15:55:01 compute-0 sudo[22287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:01 compute-0 python3.9[22289]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 15:55:01 compute-0 sudo[22287]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:01 compute-0 sudo[22371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suscormfratpncabtmiglhyqmnxfcucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938900.6887414-216-154434706739044/AnsiballZ_dnf.py'
Oct 08 15:55:01 compute-0 sudo[22371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:02 compute-0 python3.9[22373]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 15:55:03 compute-0 sudo[22371]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:04 compute-0 sudo[22524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqfxpavdczkafyttgonaoomrvfoztlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938903.8039925-240-132978238489049/AnsiballZ_systemd.py'
Oct 08 15:55:04 compute-0 sudo[22524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:04 compute-0 python3.9[22526]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:55:04 compute-0 sudo[22524]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:05 compute-0 python3.9[22679]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:06 compute-0 python3.9[22800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938904.962131-256-48857629368250/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:06 compute-0 python3.9[22950]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:07 compute-0 python3.9[23071]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938906.1949048-256-66131316935928/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:08 compute-0 python3.9[23221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:09 compute-0 python3.9[23342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938908.1039097-344-278831717920274/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:09 compute-0 python3.9[23492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:10 compute-0 python3.9[23613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938909.2746506-344-146659758507313/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:11 compute-0 python3.9[23763]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:55:11 compute-0 ovn_controller[19768]: 2025-10-08T15:55:11Z|00038|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Oct 08 15:55:11 compute-0 ovn_controller[19768]: 2025-10-08T15:55:11Z|00039|memory|INFO|idl-cells-OVN_Southbound:256 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:6 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Oct 08 15:55:11 compute-0 podman[23836]: 2025-10-08 15:55:11.516713349 +0000 UTC m=+0.114428563 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 15:55:11 compute-0 sudo[23941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkgxqhwrxuhmeklclfgvgbpbrekatlka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938911.3125072-420-236744064197956/AnsiballZ_file.py'
Oct 08 15:55:11 compute-0 sudo[23941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:11 compute-0 python3.9[23943]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:11 compute-0 sudo[23941]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:12 compute-0 sudo[24093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpplfqezqabcwupxyqmwwfksxmjxdymm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938912.0836494-436-132901683360575/AnsiballZ_stat.py'
Oct 08 15:55:12 compute-0 sudo[24093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:12 compute-0 python3.9[24095]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:12 compute-0 sudo[24093]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:12 compute-0 sudo[24171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrhewkrxlemhxpounftiijczkiexybqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938912.0836494-436-132901683360575/AnsiballZ_file.py'
Oct 08 15:55:12 compute-0 sudo[24171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:13 compute-0 python3.9[24173]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:13 compute-0 sudo[24171]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:13 compute-0 sudo[24323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovcxneewoqfqpxjifkgprohomxkxlcxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938913.4714217-436-118633160674695/AnsiballZ_stat.py'
Oct 08 15:55:13 compute-0 sudo[24323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:13 compute-0 python3.9[24325]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:14 compute-0 sudo[24323]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:14 compute-0 sudo[24401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibssatxkddfzfzqeysotrbhlrbiprypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938913.4714217-436-118633160674695/AnsiballZ_file.py'
Oct 08 15:55:14 compute-0 sudo[24401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:14 compute-0 python3.9[24403]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:14 compute-0 sudo[24401]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:14 compute-0 sudo[24553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utthykkqnhtxbqjjyagavfxuesycsgyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938914.6467657-482-133941359613252/AnsiballZ_file.py'
Oct 08 15:55:14 compute-0 sudo[24553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:15 compute-0 python3.9[24555]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:15 compute-0 sudo[24553]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:15 compute-0 sudo[24705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbpizizubjrdkdpglxoydwohmsliyze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938915.3556952-498-176413144775734/AnsiballZ_stat.py'
Oct 08 15:55:15 compute-0 sudo[24705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:15 compute-0 python3.9[24707]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:15 compute-0 sudo[24705]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:16 compute-0 sudo[24783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbzpbjeevsqeshnpdhxmexwjbpkwgqtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938915.3556952-498-176413144775734/AnsiballZ_file.py'
Oct 08 15:55:16 compute-0 sudo[24783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:16 compute-0 python3.9[24785]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:16 compute-0 sudo[24783]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:16 compute-0 sudo[24935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scuscwnlqhhcvpjqyekqjlxtgdtyozrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938916.5828936-522-22653344524439/AnsiballZ_stat.py'
Oct 08 15:55:16 compute-0 sudo[24935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:17 compute-0 python3.9[24937]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:17 compute-0 sudo[24935]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:17 compute-0 sudo[25013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlcznutdtbwztxqlmuozmgkmqcxqjfnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938916.5828936-522-22653344524439/AnsiballZ_file.py'
Oct 08 15:55:17 compute-0 sudo[25013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:17 compute-0 python3.9[25015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:17 compute-0 sudo[25013]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:18 compute-0 sudo[25165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eorbvjvhdnnlzaljoetuskeojehcyylr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938917.716856-546-27025149910793/AnsiballZ_systemd.py'
Oct 08 15:55:18 compute-0 sudo[25165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:18 compute-0 python3.9[25167]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:18 compute-0 systemd[1]: Reloading.
Oct 08 15:55:18 compute-0 systemd-rc-local-generator[25191]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:55:18 compute-0 systemd-sysv-generator[25194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:55:18 compute-0 sudo[25165]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:19 compute-0 sudo[25355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frtbadmroaovypeqxfxhwrjpsnshkxmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938918.7684867-562-97922420484818/AnsiballZ_stat.py'
Oct 08 15:55:19 compute-0 sudo[25355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:19 compute-0 python3.9[25357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:19 compute-0 sudo[25355]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:19 compute-0 sudo[25433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqgethtysjcbpkegxrxetigpiqhqlqdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938918.7684867-562-97922420484818/AnsiballZ_file.py'
Oct 08 15:55:19 compute-0 sudo[25433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:19 compute-0 python3.9[25435]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:19 compute-0 sudo[25433]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:20 compute-0 sudo[25585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrjbwaxxiueifhsvcurdohzzgehudhdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938919.932324-586-183519634168433/AnsiballZ_stat.py'
Oct 08 15:55:20 compute-0 sudo[25585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:20 compute-0 python3.9[25587]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:20 compute-0 sudo[25585]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:20 compute-0 sudo[25663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgrsfuwupxeoytbffhizrmbpikilffkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938919.932324-586-183519634168433/AnsiballZ_file.py'
Oct 08 15:55:20 compute-0 sudo[25663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:20 compute-0 python3.9[25665]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:20 compute-0 sudo[25663]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:21 compute-0 sudo[25815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hanssvdieldiqfvothspxpgtvhtkvlfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938921.139782-610-249367856963554/AnsiballZ_systemd.py'
Oct 08 15:55:21 compute-0 sudo[25815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:21 compute-0 python3.9[25817]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:21 compute-0 systemd[1]: Reloading.
Oct 08 15:55:21 compute-0 systemd-sysv-generator[25848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:55:21 compute-0 systemd-rc-local-generator[25844]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:55:22 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 15:55:22 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 15:55:22 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 15:55:22 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 15:55:22 compute-0 sudo[25815]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:22 compute-0 sudo[26008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edrcufjadgylrltfspsuvoaupvnsxpgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938922.4001586-630-261740286366669/AnsiballZ_file.py'
Oct 08 15:55:22 compute-0 sudo[26008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:22 compute-0 python3.9[26010]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:22 compute-0 sudo[26008]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:23 compute-0 sudo[26160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odletvoxjzocbiuupabludhqhbxuohzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938923.1418514-646-223826246709847/AnsiballZ_stat.py'
Oct 08 15:55:23 compute-0 sudo[26160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:23 compute-0 python3.9[26162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:23 compute-0 sudo[26160]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:23 compute-0 sudo[26283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhnpoxiptmuqqwfjdtjkmehuidumrycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938923.1418514-646-223826246709847/AnsiballZ_copy.py'
Oct 08 15:55:23 compute-0 sudo[26283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:24 compute-0 python3.9[26285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759938923.1418514-646-223826246709847/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:24 compute-0 sudo[26283]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:24 compute-0 sudo[26435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riuqejyzmjjkfmkmbyobbpmsqomereho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938924.650935-680-260921673640142/AnsiballZ_file.py'
Oct 08 15:55:24 compute-0 sudo[26435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:25 compute-0 python3.9[26437]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:55:25 compute-0 sudo[26435]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:25 compute-0 sudo[26587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koaracsapmhvgqitvqhvvepnafmkeuko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938925.359207-696-100506757217372/AnsiballZ_stat.py'
Oct 08 15:55:25 compute-0 sudo[26587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:25 compute-0 python3.9[26589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:55:25 compute-0 sudo[26587]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:26 compute-0 sudo[26710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzvsavvabjmaofqybcxtcqcbbmkqtrys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938925.359207-696-100506757217372/AnsiballZ_copy.py'
Oct 08 15:55:26 compute-0 sudo[26710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:26 compute-0 python3.9[26712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759938925.359207-696-100506757217372/.source.json _original_basename=.rxlixc0m follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:26 compute-0 sudo[26710]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:26 compute-0 sudo[26862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvtlvwldhrxljzbehohepihelzrxbsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938926.6334202-726-220935625209644/AnsiballZ_file.py'
Oct 08 15:55:26 compute-0 sudo[26862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:27 compute-0 python3.9[26864]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:27 compute-0 sudo[26862]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:27 compute-0 sudo[27014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqwlajvjfurdaqhdoijhqpuyatkgwrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938927.2999856-742-132658881123814/AnsiballZ_stat.py'
Oct 08 15:55:27 compute-0 sudo[27014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:27 compute-0 sudo[27014]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:28 compute-0 sudo[27137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdpnljxddzdeyjpskvyscynhmtabepna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938927.2999856-742-132658881123814/AnsiballZ_copy.py'
Oct 08 15:55:28 compute-0 sudo[27137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:28 compute-0 sudo[27137]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:29 compute-0 sudo[27289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwjewgmecdrtcvqihpveguwqdeujkcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938928.675308-776-176250940187459/AnsiballZ_container_config_data.py'
Oct 08 15:55:29 compute-0 sudo[27289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:29 compute-0 python3.9[27291]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 08 15:55:29 compute-0 sudo[27289]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:30 compute-0 sudo[27441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jocphdqsfdastashaaehvbmbkrtvfitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938929.6267416-794-93933651521266/AnsiballZ_container_config_hash.py'
Oct 08 15:55:30 compute-0 sudo[27441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:30 compute-0 python3.9[27443]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 15:55:30 compute-0 sudo[27441]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:30 compute-0 sudo[27593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otciaakqbqokkrnryibnkkailkxdlkls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938930.545361-812-264796564667537/AnsiballZ_podman_container_info.py'
Oct 08 15:55:30 compute-0 sudo[27593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:31 compute-0 python3.9[27595]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 15:55:31 compute-0 sudo[27593]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:32 compute-0 sudo[27771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrkqndvjbxknoiuqymfnxijdhtmgdyev ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759938931.7577736-838-141490673267346/AnsiballZ_edpm_container_manage.py'
Oct 08 15:55:32 compute-0 sudo[27771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:32 compute-0 python3[27773]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 15:55:32 compute-0 podman[27811]: 2025-10-08 15:55:32.779773527 +0000 UTC m=+0.064014665 container create c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 15:55:32 compute-0 podman[27811]: 2025-10-08 15:55:32.739856847 +0000 UTC m=+0.024098005 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 15:55:32 compute-0 python3[27773]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 15:55:32 compute-0 sudo[27771]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:33 compute-0 sudo[28000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anqonmcbgrrviqocribtagegvfpxbkqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938933.0990381-854-278420996439305/AnsiballZ_stat.py'
Oct 08 15:55:33 compute-0 sudo[28000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:33 compute-0 python3.9[28002]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:55:33 compute-0 sudo[28000]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:34 compute-0 sudo[28154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdefouuyjvvbyqvljxmewuiqmyqdsxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938933.9156253-872-275996969617673/AnsiballZ_file.py'
Oct 08 15:55:34 compute-0 sudo[28154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:34 compute-0 python3.9[28156]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:34 compute-0 sudo[28154]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:34 compute-0 sudo[28230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvtfoykivzkkoljbrfkzsllyqbiosghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938933.9156253-872-275996969617673/AnsiballZ_stat.py'
Oct 08 15:55:34 compute-0 sudo[28230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:34 compute-0 python3.9[28232]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 15:55:34 compute-0 sudo[28230]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:35 compute-0 sudo[28381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnnuupfxjatbetgupqpghrofqotmlutz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938934.9943576-872-93236200267498/AnsiballZ_copy.py'
Oct 08 15:55:35 compute-0 sudo[28381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:35 compute-0 python3.9[28383]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759938934.9943576-872-93236200267498/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:35 compute-0 sudo[28381]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:35 compute-0 sudo[28457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqtclayenqjnpaiqcnhseebznqatauvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938934.9943576-872-93236200267498/AnsiballZ_systemd.py'
Oct 08 15:55:35 compute-0 sudo[28457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:36 compute-0 python3.9[28459]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 15:55:36 compute-0 systemd[1]: Reloading.
Oct 08 15:55:36 compute-0 systemd-rc-local-generator[28487]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:55:36 compute-0 systemd-sysv-generator[28490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:55:36 compute-0 sudo[28457]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:36 compute-0 sudo[28568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnrrmzwhhpxiqsrixaeelrdkoorkrfki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938934.9943576-872-93236200267498/AnsiballZ_systemd.py'
Oct 08 15:55:36 compute-0 sudo[28568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:37 compute-0 python3.9[28570]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:37 compute-0 systemd[1]: Reloading.
Oct 08 15:55:37 compute-0 systemd-rc-local-generator[28601]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:55:37 compute-0 systemd-sysv-generator[28605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:55:37 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Oct 08 15:55:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 15:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/319e9054a338efe840162a14e2863aac143e5ef2e12344626a4c60009ea53c3d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 08 15:55:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/319e9054a338efe840162a14e2863aac143e5ef2e12344626a4c60009ea53c3d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 15:55:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a.
Oct 08 15:55:37 compute-0 podman[28612]: 2025-10-08 15:55:37.602720751 +0000 UTC m=+0.141912329 container init c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + sudo -E kolla_set_configs
Oct 08 15:55:37 compute-0 podman[28612]: 2025-10-08 15:55:37.633447336 +0000 UTC m=+0.172638894 container start c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Oct 08 15:55:37 compute-0 edpm-start-podman-container[28612]: ovn_metadata_agent
Oct 08 15:55:37 compute-0 edpm-start-podman-container[28611]: Creating additional drop-in dependency for "ovn_metadata_agent" (c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a)
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Validating config file
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Copying service configuration files
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Writing out command to execute
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 08 15:55:37 compute-0 systemd[1]: Reloading.
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 08 15:55:37 compute-0 podman[28634]: 2025-10-08 15:55:37.735271979 +0000 UTC m=+0.086381899 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: ++ cat /run_command
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + CMD=neutron-ovn-metadata-agent
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + ARGS=
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + sudo kolla_copy_cacerts
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + [[ ! -n '' ]]
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + . kolla_extend_start
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: Running command: 'neutron-ovn-metadata-agent'
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + umask 0022
Oct 08 15:55:37 compute-0 ovn_metadata_agent[28628]: + exec neutron-ovn-metadata-agent
Oct 08 15:55:37 compute-0 systemd-rc-local-generator[28705]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:55:37 compute-0 systemd-sysv-generator[28709]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:55:37 compute-0 systemd[1]: Started ovn_metadata_agent container.
Oct 08 15:55:38 compute-0 sudo[28568]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:38 compute-0 sshd-session[20376]: Connection closed by 192.168.122.30 port 56142
Oct 08 15:55:38 compute-0 sshd-session[20373]: pam_unix(sshd:session): session closed for user zuul
Oct 08 15:55:38 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Oct 08 15:55:38 compute-0 systemd[1]: session-6.scope: Consumed 36.460s CPU time.
Oct 08 15:55:38 compute-0 systemd-logind[847]: Session 6 logged out. Waiting for processes to exit.
Oct 08 15:55:38 compute-0 systemd-logind[847]: Removed session 6.
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.812 28633 INFO neutron.common.config [-] Logging enabled!
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.812 28633 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 26.1.0.dev268
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.812 28633 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.12/site-packages/neutron/common/config.py:124
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.813 28633 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.814 28633 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] enable_signals                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.815 28633 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.816 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.817 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] my_ip                          = 38.102.83.199 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] my_ipv6                        = ::1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.818 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.819 28633 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.820 28633 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_qinq                      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.821 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_requests        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_jaeger.process_tags   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.822 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_jaeger.service_name_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] profiler_otlp.service_name_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.823 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_timeout     = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.log_daemon_traceback   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.824 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.825 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.826 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.827 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mappings            = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.datapath_type              = system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.828 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood_reports         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_flood_unregistered    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.int_peer_patch_port        = patch-tun log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.integration_bridge         = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.local_ip                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_connect_timeout         = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_inactivity_probe        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_listen_address          = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_listen_port             = 6633 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.of_request_timeout         = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.openflow_processed_per_port = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_debug                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.829 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.qos_meter_bandwidth        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_bandwidths = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_default_hypervisor = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_hypervisors = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_with_direction = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.resource_provider_packet_processing_without_direction = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_ca_cert_file           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_cert_file              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ssl_key_file               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.tun_peer_patch_port        = patch-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.tunnel_bridge              = br-tun log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.830 28633 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.vhostuser_socket_dir       = /var/run/openvswitch log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] agent.extensions               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.831 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.832 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.833 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.834 28633 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.broadcast_arps_to_all_routers = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_records_ovn_owned      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.fdb_age_threshold          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.live_migration_activation_strategy = rarp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.localnet_learn_fdb         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.mac_binding_age_threshold  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.835 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = ['tcp:127.0.0.1:6641'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_router_indirect_snat   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ['ssl:ovsdbserver-sb.openstack.svc:6642'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.836 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.fdb_removal_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.ignore_lsp_down  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovn_nb_global.mac_binding_removal_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.base_query_rate_limit = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.base_window_duration = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.burst_query_rate_limit = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.burst_window_duration = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.ip_versions = [4] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_rate_limiting.rate_limit_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.837 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.838 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.processname = neutron-ovn-metadata-agent log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.839 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_quorum_queue = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.840 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.841 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.841 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.841 28633 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.841 28633 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.893 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.893 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.893 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.893 28633 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.893 28633 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.904 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name f72d8dca-98f2-44ea-b875-cd9a8b583db6 (UUID: f72d8dca-98f2-44ea-b875-cd9a8b583db6) and ovn bridge br-int. _load_config /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:419
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.940 28633 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.940 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.940 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Port_Binding.logical_port autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.941 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.941 28633 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.945 28633 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.949 28633 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.956 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'f72d8dca-98f2-44ea-b875-cd9a8b583db6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], external_ids={}, name=f72d8dca-98f2-44ea-b875-cd9a8b583db6, nb_cfg_timestamp=1759938890506, nb_cfg=1) old= matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 15:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:41.960 28633 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpghl029oq/privsep.sock']
Oct 08 15:55:42 compute-0 podman[28751]: 2025-10-08 15:55:42.502810993 +0000 UTC m=+0.105217202 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 08 15:55:42 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.776 28633 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.776 28633 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpghl029oq/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.591 28777 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.595 28777 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.599 28777 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.599 28777 INFO oslo.privsep.daemon [-] privsep daemon running as pid 28777
Oct 08 15:55:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:42.778 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[011f48ce-dc41-4126-86ee-0ba90a99af11]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.300 28777 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.301 28777 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.301 28777 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.878 28777 INFO oslo_service.backend [-] Loading backend: eventlet
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.889 28777 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.928 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[92c61129-209b-4e6d-8acd-fd48052b7d81]: (4, []) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.930 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, column=external_ids, values=({'neutron:ovn-metadata-id': '56ca5fa6-51af-5b38-8df9-fdfc3b1586f1'},)) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.942 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 15:55:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:55:43.949 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '1'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 15:55:44 compute-0 sshd-session[28782]: Accepted publickey for zuul from 192.168.122.30 port 53930 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 15:55:44 compute-0 systemd-logind[847]: New session 7 of user zuul.
Oct 08 15:55:44 compute-0 systemd[1]: Started Session 7 of User zuul.
Oct 08 15:55:44 compute-0 sshd-session[28782]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 15:55:45 compute-0 python3.9[28935]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 15:55:46 compute-0 sudo[29089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukarbrsbjneiluiufpgfuctaycimybbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938945.8022661-48-263307624311758/AnsiballZ_command.py'
Oct 08 15:55:46 compute-0 sudo[29089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:46 compute-0 python3.9[29091]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:55:46 compute-0 sudo[29089]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:47 compute-0 sudo[29254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgbezghibjwuwrceqaknggfalweyrfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938946.9048097-70-113672747228525/AnsiballZ_systemd_service.py'
Oct 08 15:55:47 compute-0 sudo[29254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:47 compute-0 python3.9[29256]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 15:55:47 compute-0 systemd[1]: Reloading.
Oct 08 15:55:47 compute-0 systemd-sysv-generator[29285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:55:47 compute-0 systemd-rc-local-generator[29279]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:55:48 compute-0 sudo[29254]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:48 compute-0 python3.9[29441]: ansible-ansible.builtin.service_facts Invoked
Oct 08 15:55:48 compute-0 network[29458]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 15:55:48 compute-0 network[29459]: 'network-scripts' will be removed from distribution in near future.
Oct 08 15:55:48 compute-0 network[29460]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 15:55:52 compute-0 sudo[29722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwdhsfgmufbrvvhznmxthrghfumickmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938952.3089485-108-107384702586267/AnsiballZ_systemd_service.py'
Oct 08 15:55:52 compute-0 sudo[29722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:52 compute-0 python3.9[29724]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:52 compute-0 sudo[29722]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:53 compute-0 sudo[29875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbuqwutbwxuhqscbjxyvwlmvmczihcal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938953.1032543-108-175904555221530/AnsiballZ_systemd_service.py'
Oct 08 15:55:53 compute-0 sudo[29875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:53 compute-0 python3.9[29877]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:53 compute-0 sudo[29875]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:54 compute-0 sudo[30028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfrszrhdgnmgmbkjhktufipxfnrpmyxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938953.9332714-108-100977381213189/AnsiballZ_systemd_service.py'
Oct 08 15:55:54 compute-0 sudo[30028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:54 compute-0 python3.9[30030]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:54 compute-0 sudo[30028]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:55 compute-0 sudo[30181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwfsveibroihjeztnmjtorneoblpdcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938954.7539034-108-98816979564024/AnsiballZ_systemd_service.py'
Oct 08 15:55:55 compute-0 sudo[30181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:55 compute-0 python3.9[30183]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:55 compute-0 sudo[30181]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:55 compute-0 sudo[30334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqasifahpregwhcxaezigayuwffllfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938955.5830376-108-4341890089093/AnsiballZ_systemd_service.py'
Oct 08 15:55:55 compute-0 sudo[30334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:56 compute-0 python3.9[30336]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:56 compute-0 sudo[30334]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:56 compute-0 sudo[30487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlwwvfjxsxvrrvucpbgqdthvlasvbeqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938956.4212706-108-123822800003544/AnsiballZ_systemd_service.py'
Oct 08 15:55:56 compute-0 sudo[30487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:57 compute-0 python3.9[30489]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:57 compute-0 sudo[30487]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:57 compute-0 sudo[30640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euwkagahvfknslsfhxmxyeoxzszonrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938957.2088282-108-93698211041573/AnsiballZ_systemd_service.py'
Oct 08 15:55:57 compute-0 sudo[30640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:57 compute-0 python3.9[30642]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 15:55:57 compute-0 sudo[30640]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:58 compute-0 sudo[30793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnobfgpxzolpjxmnsvhfpkkjjvmbrvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938958.2496846-212-82361506697615/AnsiballZ_file.py'
Oct 08 15:55:58 compute-0 sudo[30793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:58 compute-0 python3.9[30795]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:59 compute-0 sudo[30793]: pam_unix(sudo:session): session closed for user root
Oct 08 15:55:59 compute-0 sudo[30945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncdwzwynhpalnadzrbhorzbmdqmndti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938959.22632-212-214582490833880/AnsiballZ_file.py'
Oct 08 15:55:59 compute-0 sudo[30945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:55:59 compute-0 python3.9[30947]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:55:59 compute-0 sudo[30945]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:00 compute-0 sudo[31097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbnamczxuznwnbwdvukcrwfraoarbbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938959.898006-212-180751898942205/AnsiballZ_file.py'
Oct 08 15:56:00 compute-0 sudo[31097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:00 compute-0 python3.9[31099]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:00 compute-0 sudo[31097]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:00 compute-0 sudo[31249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeueuuagiitizkvacesyqmnppklzjqin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938960.5391839-212-57003232782371/AnsiballZ_file.py'
Oct 08 15:56:00 compute-0 sudo[31249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:01 compute-0 python3.9[31251]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:01 compute-0 sudo[31249]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:01 compute-0 sudo[31401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfwgwfoofgpleqkckorlxsuisnieufmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938961.1964283-212-14257910621823/AnsiballZ_file.py'
Oct 08 15:56:01 compute-0 sudo[31401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:01 compute-0 python3.9[31403]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:01 compute-0 sudo[31401]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:02 compute-0 sudo[31553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fslyddszufnctivzorinygzupvinblhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938961.8271713-212-60989723431415/AnsiballZ_file.py'
Oct 08 15:56:02 compute-0 sudo[31553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:02 compute-0 python3.9[31555]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:02 compute-0 sudo[31553]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:02 compute-0 sudo[31705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdetfwnsqdxaoqbwmvyazqfujnpbsmmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938962.4849102-212-251553601464099/AnsiballZ_file.py'
Oct 08 15:56:02 compute-0 sudo[31705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:02 compute-0 python3.9[31707]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:02 compute-0 sudo[31705]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:03 compute-0 sudo[31857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wczopogciseunippsejcqaebpkopegfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938963.2006705-312-44192265143749/AnsiballZ_file.py'
Oct 08 15:56:03 compute-0 sudo[31857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:03 compute-0 python3.9[31859]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:03 compute-0 sudo[31857]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:04 compute-0 sudo[32009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uboythzcaoxheficnhwdvawxgcjffvfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938963.8444924-312-135041400423645/AnsiballZ_file.py'
Oct 08 15:56:04 compute-0 sudo[32009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:04 compute-0 python3.9[32011]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:04 compute-0 sudo[32009]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:04 compute-0 sudo[32161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxewowtuiwnfbikgvbutchukpoyyydtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938964.519549-312-256407646589027/AnsiballZ_file.py'
Oct 08 15:56:04 compute-0 sudo[32161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:05 compute-0 python3.9[32163]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:05 compute-0 sudo[32161]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:05 compute-0 sudo[32313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhiryswobbqbvhaddmhksifxzbvnqlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938965.3487225-312-127077251395777/AnsiballZ_file.py'
Oct 08 15:56:05 compute-0 sudo[32313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:05 compute-0 python3.9[32315]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:05 compute-0 sudo[32313]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:06 compute-0 sudo[32465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmqjimgqwcbdmuthnhsncjiofdfotun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938966.0085585-312-160947267198130/AnsiballZ_file.py'
Oct 08 15:56:06 compute-0 sudo[32465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:06 compute-0 python3.9[32467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:06 compute-0 sudo[32465]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:06 compute-0 sudo[32617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwnmmyejwhjxgauzruqidsereqyyjklh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938966.6488564-312-190330501857373/AnsiballZ_file.py'
Oct 08 15:56:06 compute-0 sudo[32617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:07 compute-0 python3.9[32619]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:07 compute-0 sudo[32617]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:07 compute-0 sudo[32769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxjxhkmsssdukavdstroifgkmbnyres ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938967.3060145-312-281150378192641/AnsiballZ_file.py'
Oct 08 15:56:07 compute-0 sudo[32769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:07 compute-0 python3.9[32771]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:56:07 compute-0 sudo[32769]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:08 compute-0 sudo[32932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujleiyzvzatevgoxbbnwzgbtevorvgzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938968.0741673-414-195694284863984/AnsiballZ_command.py'
Oct 08 15:56:08 compute-0 sudo[32932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:08 compute-0 podman[32895]: 2025-10-08 15:56:08.379893887 +0000 UTC m=+0.064142458 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Oct 08 15:56:08 compute-0 python3.9[32940]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:08 compute-0 sudo[32932]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:09 compute-0 python3.9[33092]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 15:56:09 compute-0 sudo[33242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhguuaqxlcdlyyfcfsvdxqrovwqmzpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938969.646975-450-35100563824884/AnsiballZ_systemd_service.py'
Oct 08 15:56:09 compute-0 sudo[33242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:10 compute-0 python3.9[33244]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 15:56:10 compute-0 systemd[1]: Reloading.
Oct 08 15:56:10 compute-0 systemd-sysv-generator[33273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:56:10 compute-0 systemd-rc-local-generator[33270]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:56:10 compute-0 sudo[33242]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:10 compute-0 sudo[33429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcdmgpkraccpeaqbxbcqvgmhwyntdhiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938970.6910524-466-135709864039975/AnsiballZ_command.py'
Oct 08 15:56:10 compute-0 sudo[33429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:11 compute-0 python3.9[33431]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:11 compute-0 sudo[33429]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:11 compute-0 sudo[33582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcwryqhevegqavgdggdnhkweigqsheuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938971.3739278-466-38245758985795/AnsiballZ_command.py'
Oct 08 15:56:11 compute-0 sudo[33582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:11 compute-0 python3.9[33584]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:11 compute-0 sudo[33582]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:12 compute-0 sudo[33735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdmsocphzixitektofdifvhgkfioimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938972.0133402-466-272746855995797/AnsiballZ_command.py'
Oct 08 15:56:12 compute-0 sudo[33735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:12 compute-0 python3.9[33737]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:12 compute-0 sudo[33735]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:12 compute-0 podman[33739]: 2025-10-08 15:56:12.681892563 +0000 UTC m=+0.098455866 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 08 15:56:13 compute-0 sudo[33914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjnkcxwecytodaqmsnmuhacvdhmhmws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938972.7378507-466-99534408863497/AnsiballZ_command.py'
Oct 08 15:56:13 compute-0 sudo[33914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:13 compute-0 python3.9[33916]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:13 compute-0 sudo[33914]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:13 compute-0 sudo[34067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjnuvtnmbobxgpimdnffqckznangqjyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938973.4066746-466-94052823240376/AnsiballZ_command.py'
Oct 08 15:56:13 compute-0 sudo[34067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:13 compute-0 python3.9[34069]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:13 compute-0 sudo[34067]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:14 compute-0 sudo[34220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyarbugocmjmvrtzraahjdcltfpvxglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938974.0795653-466-188139351341235/AnsiballZ_command.py'
Oct 08 15:56:14 compute-0 sudo[34220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:14 compute-0 python3.9[34222]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:14 compute-0 sudo[34220]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:15 compute-0 sudo[34373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxaktwhqeegfwleauidplzvalokczqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938974.8630004-466-29117684437361/AnsiballZ_command.py'
Oct 08 15:56:15 compute-0 sudo[34373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:15 compute-0 python3.9[34375]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:56:15 compute-0 sudo[34373]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:16 compute-0 sudo[34526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spuwhmzljtffrftbxgeeceaoesovoeuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938975.8745177-574-156204260952563/AnsiballZ_getent.py'
Oct 08 15:56:16 compute-0 sudo[34526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:16 compute-0 python3.9[34528]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 08 15:56:16 compute-0 sudo[34526]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:17 compute-0 sudo[34679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyofjzxjrepcryoalnszdazduvhpxnwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938976.7942922-590-198705473424938/AnsiballZ_group.py'
Oct 08 15:56:17 compute-0 sudo[34679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:17 compute-0 python3.9[34681]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 15:56:17 compute-0 groupadd[34682]: group added to /etc/group: name=libvirt, GID=42473
Oct 08 15:56:18 compute-0 groupadd[34682]: group added to /etc/gshadow: name=libvirt
Oct 08 15:56:18 compute-0 groupadd[34682]: new group: name=libvirt, GID=42473
Oct 08 15:56:18 compute-0 sudo[34679]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:18 compute-0 sudo[34837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzulelwtfscrpeambaylolwjgulumtjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938978.2651527-606-60261979890262/AnsiballZ_user.py'
Oct 08 15:56:18 compute-0 sudo[34837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:19 compute-0 python3.9[34839]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 15:56:19 compute-0 useradd[34841]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 08 15:56:20 compute-0 sudo[34837]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:21 compute-0 sudo[34997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jivmynsrfnzublrbzbiynprdkchvlxpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938981.2233078-628-270887310715009/AnsiballZ_setup.py'
Oct 08 15:56:21 compute-0 sudo[34997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:21 compute-0 python3.9[34999]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 15:56:22 compute-0 sudo[34997]: pam_unix(sudo:session): session closed for user root
Oct 08 15:56:22 compute-0 sudo[35081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trowoambiryldejszwkyshbcobjoxqdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759938981.2233078-628-270887310715009/AnsiballZ_dnf.py'
Oct 08 15:56:22 compute-0 sudo[35081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:56:22 compute-0 python3.9[35083]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 15:56:39 compute-0 podman[35267]: 2025-10-08 15:56:39.462032386 +0000 UTC m=+0.070190336 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 15:56:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:56:41.843 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 15:56:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:56:41.844 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 15:56:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:56:41.844 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 15:56:43 compute-0 podman[35295]: 2025-10-08 15:56:43.533062596 +0000 UTC m=+0.139580828 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 15:56:55 compute-0 kernel: SELinux:  Converting 430 SID table entries...
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 15:56:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 15:57:07 compute-0 kernel: SELinux:  Converting 430 SID table entries...
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 15:57:07 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 15:57:10 compute-0 dbus-broker-launch[839]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct 08 15:57:10 compute-0 podman[35339]: 2025-10-08 15:57:10.471800986 +0000 UTC m=+0.061686322 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 15:57:14 compute-0 podman[35359]: 2025-10-08 15:57:14.487929661 +0000 UTC m=+0.093129324 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 08 15:57:41 compute-0 podman[47952]: 2025-10-08 15:57:41.456923943 +0000 UTC m=+0.062438080 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 08 15:57:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:57:41.845 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 15:57:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:57:41.845 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 15:57:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:57:41.845 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 15:57:45 compute-0 podman[50251]: 2025-10-08 15:57:45.520489231 +0000 UTC m=+0.125119716 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 15:58:02 compute-0 kernel: SELinux:  Converting 431 SID table entries...
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 08 15:58:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 08 15:58:03 compute-0 groupadd[52200]: group added to /etc/group: name=dnsmasq, GID=992
Oct 08 15:58:03 compute-0 groupadd[52200]: group added to /etc/gshadow: name=dnsmasq
Oct 08 15:58:04 compute-0 groupadd[52200]: new group: name=dnsmasq, GID=992
Oct 08 15:58:04 compute-0 useradd[52207]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 08 15:58:04 compute-0 dbus-broker-launch[838]: Noticed file-system modification, trigger reload.
Oct 08 15:58:04 compute-0 dbus-broker-launch[839]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct 08 15:58:04 compute-0 dbus-broker-launch[838]: Noticed file-system modification, trigger reload.
Oct 08 15:58:05 compute-0 groupadd[52220]: group added to /etc/group: name=clevis, GID=991
Oct 08 15:58:05 compute-0 groupadd[52220]: group added to /etc/gshadow: name=clevis
Oct 08 15:58:05 compute-0 groupadd[52220]: new group: name=clevis, GID=991
Oct 08 15:58:05 compute-0 useradd[52227]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 08 15:58:05 compute-0 usermod[52237]: add 'clevis' to group 'tss'
Oct 08 15:58:05 compute-0 usermod[52237]: add 'clevis' to shadow group 'tss'
Oct 08 15:58:07 compute-0 polkitd[1182]: Reloading rules
Oct 08 15:58:07 compute-0 polkitd[1182]: Collecting garbage unconditionally...
Oct 08 15:58:07 compute-0 polkitd[1182]: Loading rules from directory /etc/polkit-1/rules.d
Oct 08 15:58:07 compute-0 polkitd[1182]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 08 15:58:07 compute-0 polkitd[1182]: Finished loading, compiling and executing 4 rules
Oct 08 15:58:07 compute-0 polkitd[1182]: Reloading rules
Oct 08 15:58:07 compute-0 polkitd[1182]: Collecting garbage unconditionally...
Oct 08 15:58:07 compute-0 polkitd[1182]: Loading rules from directory /etc/polkit-1/rules.d
Oct 08 15:58:07 compute-0 polkitd[1182]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 08 15:58:07 compute-0 polkitd[1182]: Finished loading, compiling and executing 4 rules
Oct 08 15:58:09 compute-0 groupadd[52424]: group added to /etc/group: name=ceph, GID=167
Oct 08 15:58:09 compute-0 groupadd[52424]: group added to /etc/gshadow: name=ceph
Oct 08 15:58:09 compute-0 groupadd[52424]: new group: name=ceph, GID=167
Oct 08 15:58:09 compute-0 useradd[52430]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 08 15:58:12 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Oct 08 15:58:12 compute-0 sshd[1297]: Received signal 15; terminating.
Oct 08 15:58:12 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Oct 08 15:58:12 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Oct 08 15:58:12 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Oct 08 15:58:12 compute-0 systemd[1]: Stopping sshd-keygen.target...
Oct 08 15:58:12 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 15:58:12 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 15:58:12 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 08 15:58:12 compute-0 systemd[1]: Reached target sshd-keygen.target.
Oct 08 15:58:12 compute-0 systemd[1]: Starting OpenSSH server daemon...
Oct 08 15:58:12 compute-0 sshd[52944]: Server listening on 0.0.0.0 port 22.
Oct 08 15:58:12 compute-0 sshd[52944]: Server listening on :: port 22.
Oct 08 15:58:12 compute-0 systemd[1]: Started OpenSSH server daemon.
Oct 08 15:58:12 compute-0 podman[52931]: 2025-10-08 15:58:12.226053706 +0000 UTC m=+0.072747828 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 15:58:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 15:58:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 15:58:14 compute-0 systemd[1]: Reloading.
Oct 08 15:58:14 compute-0 systemd-rc-local-generator[53211]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:14 compute-0 systemd-sysv-generator[53214]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:14 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 15:58:16 compute-0 podman[54767]: 2025-10-08 15:58:16.548895896 +0000 UTC m=+0.138674541 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 15:58:17 compute-0 systemd[1]: Starting PackageKit Daemon...
Oct 08 15:58:17 compute-0 PackageKit[55591]: daemon start
Oct 08 15:58:17 compute-0 systemd[1]: Started PackageKit Daemon.
Oct 08 15:58:17 compute-0 sudo[35081]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:19 compute-0 sudo[57609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuxpseynoxrhfhsoknboqlnqhvvwgysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939098.6325238-652-151897388083650/AnsiballZ_systemd.py'
Oct 08 15:58:19 compute-0 sudo[57609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:19 compute-0 python3.9[57630]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:58:19 compute-0 systemd[1]: Reloading.
Oct 08 15:58:19 compute-0 systemd-rc-local-generator[58022]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:19 compute-0 systemd-sysv-generator[58025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:19 compute-0 sudo[57609]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:20 compute-0 sudo[58709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcaqwxdedvizvbhbxhdobtavguwidob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939100.0599742-652-125905166975488/AnsiballZ_systemd.py'
Oct 08 15:58:20 compute-0 sudo[58709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:20 compute-0 python3.9[58724]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:58:20 compute-0 systemd[1]: Reloading.
Oct 08 15:58:20 compute-0 systemd-rc-local-generator[59098]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:20 compute-0 systemd-sysv-generator[59101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:21 compute-0 sudo[58709]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:21 compute-0 sudo[59785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coiyqymmqbvxlqiztcjgpfyxbferlonz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939101.1804636-652-62255291542005/AnsiballZ_systemd.py'
Oct 08 15:58:21 compute-0 sudo[59785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:21 compute-0 python3.9[59807]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:58:21 compute-0 systemd[1]: Reloading.
Oct 08 15:58:21 compute-0 systemd-rc-local-generator[60287]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:21 compute-0 systemd-sysv-generator[60291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:22 compute-0 sudo[59785]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:22 compute-0 sudo[60975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqcydwtskqhmzjyvuyxislerjqgyplqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939102.2761521-652-98428651254664/AnsiballZ_systemd.py'
Oct 08 15:58:22 compute-0 sudo[60975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:22 compute-0 python3.9[60997]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:58:22 compute-0 systemd[1]: Reloading.
Oct 08 15:58:23 compute-0 systemd-rc-local-generator[61419]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:23 compute-0 systemd-sysv-generator[61424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:23 compute-0 sudo[60975]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:23 compute-0 sudo[62056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raixtviwvrlmgxizzrjevzjowmbkidhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939103.4303305-710-148440115858776/AnsiballZ_systemd.py'
Oct 08 15:58:23 compute-0 sudo[62056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:24 compute-0 python3.9[62071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:24 compute-0 systemd[1]: Reloading.
Oct 08 15:58:24 compute-0 systemd-sysv-generator[62437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:24 compute-0 systemd-rc-local-generator[62433]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 15:58:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 15:58:24 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.132s CPU time.
Oct 08 15:58:24 compute-0 systemd[1]: run-r0ba3db2a80984291a2af3727d438455d.service: Deactivated successfully.
Oct 08 15:58:24 compute-0 sudo[62056]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:24 compute-0 sudo[62590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbihwxloqyyhchgiqwriewifxsikasqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939104.615222-710-82877697345379/AnsiballZ_systemd.py'
Oct 08 15:58:24 compute-0 sudo[62590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:25 compute-0 python3.9[62592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:25 compute-0 systemd[1324]: Created slice User Background Tasks Slice.
Oct 08 15:58:25 compute-0 systemd[1324]: Starting Cleanup of User's Temporary Files and Directories...
Oct 08 15:58:25 compute-0 systemd[1]: Reloading.
Oct 08 15:58:25 compute-0 systemd[1324]: Finished Cleanup of User's Temporary Files and Directories.
Oct 08 15:58:25 compute-0 systemd-sysv-generator[62627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:25 compute-0 systemd-rc-local-generator[62624]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:25 compute-0 sudo[62590]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:26 compute-0 sudo[62781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-excnckzabsoeyxelupbngpmicbjlnfgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939105.8066647-710-139097301187050/AnsiballZ_systemd.py'
Oct 08 15:58:26 compute-0 sudo[62781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:26 compute-0 python3.9[62783]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:26 compute-0 systemd[1]: Reloading.
Oct 08 15:58:26 compute-0 systemd-rc-local-generator[62814]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:26 compute-0 systemd-sysv-generator[62817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:26 compute-0 sudo[62781]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:27 compute-0 sudo[62970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrytcedkjrgttliyhhbfxksqypewryot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939106.923729-710-86779137040564/AnsiballZ_systemd.py'
Oct 08 15:58:27 compute-0 sudo[62970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:27 compute-0 python3.9[62972]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:27 compute-0 sudo[62970]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:28 compute-0 sudo[63125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pneeaefgkxkrbaswttgsgkewjwnsvvis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939107.7803864-710-84357451773889/AnsiballZ_systemd.py'
Oct 08 15:58:28 compute-0 sudo[63125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:28 compute-0 python3.9[63127]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:28 compute-0 systemd[1]: Reloading.
Oct 08 15:58:28 compute-0 systemd-rc-local-generator[63159]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:28 compute-0 systemd-sysv-generator[63163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:28 compute-0 sudo[63125]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:29 compute-0 sudo[63316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muruuicuhgcnjrwbiubdxjjvltztwflc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939109.043854-782-275280591494638/AnsiballZ_systemd.py'
Oct 08 15:58:29 compute-0 sudo[63316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:29 compute-0 python3.9[63318]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 08 15:58:29 compute-0 systemd[1]: Reloading.
Oct 08 15:58:29 compute-0 systemd-rc-local-generator[63349]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:58:29 compute-0 systemd-sysv-generator[63352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:58:30 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 08 15:58:30 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 08 15:58:30 compute-0 sudo[63316]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:30 compute-0 sudo[63509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujjlnavrensibdwyjegwwgskxwyijhdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939110.3417714-798-181971175749548/AnsiballZ_systemd.py'
Oct 08 15:58:30 compute-0 sudo[63509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:30 compute-0 python3.9[63511]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:31 compute-0 sudo[63509]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:31 compute-0 sudo[63664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwxekrbviitlwhghxcsiifmxuhpvlpmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939111.2174642-798-205005235240305/AnsiballZ_systemd.py'
Oct 08 15:58:31 compute-0 sudo[63664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:31 compute-0 python3.9[63666]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:31 compute-0 sudo[63664]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:32 compute-0 sudo[63819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inhftxrbbyrbyjkuizewarscanblrvey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939112.066963-798-163552107436188/AnsiballZ_systemd.py'
Oct 08 15:58:32 compute-0 sudo[63819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:32 compute-0 python3.9[63821]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:32 compute-0 sudo[63819]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:33 compute-0 sudo[63974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbinuwgszewxzliqmmoeqbwuytxzbice ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939112.8925104-798-276542421701425/AnsiballZ_systemd.py'
Oct 08 15:58:33 compute-0 sudo[63974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:33 compute-0 python3.9[63976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:33 compute-0 sudo[63974]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:34 compute-0 sudo[64129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrrjvwwijztevqvcydsopvfrswwnnvde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939113.7658107-798-120748942785285/AnsiballZ_systemd.py'
Oct 08 15:58:34 compute-0 sudo[64129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:34 compute-0 python3.9[64131]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:34 compute-0 sudo[64129]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:35 compute-0 sudo[64284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirwuxjwjlymulmgljebvrwksgerdbkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939114.6823986-798-277808015990468/AnsiballZ_systemd.py'
Oct 08 15:58:35 compute-0 sudo[64284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:35 compute-0 python3.9[64286]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:35 compute-0 sudo[64284]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:35 compute-0 sudo[64439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcgbiqogqhvjrlvuzweyshqsavfzjmpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939115.5811658-798-10302010673109/AnsiballZ_systemd.py'
Oct 08 15:58:35 compute-0 sudo[64439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:36 compute-0 python3.9[64441]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:36 compute-0 sudo[64439]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:36 compute-0 sudo[64594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unbozqtlotjlotfsjskvhwgmuokrpnys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939116.4073076-798-276078311418937/AnsiballZ_systemd.py'
Oct 08 15:58:36 compute-0 sudo[64594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:37 compute-0 python3.9[64596]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:38 compute-0 sudo[64594]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:38 compute-0 sudo[64749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwoqcnpoufmjsposahezqluabjnontx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939118.2562575-798-52349946550470/AnsiballZ_systemd.py'
Oct 08 15:58:38 compute-0 sudo[64749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:38 compute-0 python3.9[64751]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:39 compute-0 sudo[64749]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:39 compute-0 sudo[64904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcxftmhchnsabkbpugapkzetngimcxrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939119.156524-798-202368161993855/AnsiballZ_systemd.py'
Oct 08 15:58:39 compute-0 sudo[64904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:39 compute-0 python3.9[64906]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:39 compute-0 sudo[64904]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:40 compute-0 sudo[65059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqjfqgjawxwhfsxbnzwmmleotjqefus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939120.0139835-798-224498940636862/AnsiballZ_systemd.py'
Oct 08 15:58:40 compute-0 sudo[65059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:40 compute-0 python3.9[65061]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:40 compute-0 sudo[65059]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:41 compute-0 sudo[65214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ielmlydbscwddnmhceltrmlswgynnafb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939120.880417-798-183743953671553/AnsiballZ_systemd.py'
Oct 08 15:58:41 compute-0 sudo[65214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:41 compute-0 python3.9[65216]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:41 compute-0 sudo[65214]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:58:41.847 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 15:58:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:58:41.848 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 15:58:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:58:41.848 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 15:58:42 compute-0 sudo[65370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchyjzowwdmexxvkdxhqvgwtgnfyyeeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939121.7094564-798-119881471698983/AnsiballZ_systemd.py'
Oct 08 15:58:42 compute-0 sudo[65370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:42 compute-0 python3.9[65372]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:42 compute-0 sudo[65370]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:42 compute-0 podman[65374]: 2025-10-08 15:58:42.445119218 +0000 UTC m=+0.074712615 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 08 15:58:42 compute-0 sudo[65543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwguiqeasptldzrcivpyphgiydszunon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939122.59188-798-197686507822411/AnsiballZ_systemd.py'
Oct 08 15:58:42 compute-0 sudo[65543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:43 compute-0 python3.9[65545]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 08 15:58:43 compute-0 sudo[65543]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:45 compute-0 sudo[65698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilhngiihtttucqcognjlisivpbwwtumz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939124.688622-1002-237027926870294/AnsiballZ_file.py'
Oct 08 15:58:45 compute-0 sudo[65698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:45 compute-0 python3.9[65700]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:58:45 compute-0 sudo[65698]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:45 compute-0 sudo[65850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqernjjbamexhmfaegspammbgmgrapoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939125.4409058-1002-121925376159369/AnsiballZ_file.py'
Oct 08 15:58:45 compute-0 sudo[65850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:45 compute-0 python3.9[65852]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:58:45 compute-0 sudo[65850]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:46 compute-0 sudo[66002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfzsxwtkukovglreityagovzwkwzqoem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939126.1616113-1002-32834726949792/AnsiballZ_file.py'
Oct 08 15:58:46 compute-0 sudo[66002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:46 compute-0 python3.9[66004]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:58:46 compute-0 sudo[66002]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:47 compute-0 sudo[66172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghqnmlatvcgkpasijqowrywwlogsdyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939126.9353156-1002-20165106323075/AnsiballZ_file.py'
Oct 08 15:58:47 compute-0 sudo[66172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:47 compute-0 podman[66128]: 2025-10-08 15:58:47.305112372 +0000 UTC m=+0.101548474 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 15:58:47 compute-0 python3.9[66177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:58:47 compute-0 sudo[66172]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:47 compute-0 sudo[66333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lypjnsxmmsqjakecqxagkhifopoolovu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939127.6259708-1002-54977284638188/AnsiballZ_file.py'
Oct 08 15:58:47 compute-0 sudo[66333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:48 compute-0 python3.9[66335]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:58:48 compute-0 sudo[66333]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:48 compute-0 sudo[66485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbkorjijpigfeclntqmiojrsuypreqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939128.2729692-1002-58850206388797/AnsiballZ_file.py'
Oct 08 15:58:48 compute-0 sudo[66485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:48 compute-0 python3.9[66487]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 15:58:48 compute-0 sudo[66485]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:49 compute-0 sudo[66637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzojbegizhafvcjbqxopvejatjllvysf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939128.9998384-1088-222968041806225/AnsiballZ_stat.py'
Oct 08 15:58:49 compute-0 sudo[66637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:49 compute-0 python3.9[66639]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:49 compute-0 sudo[66637]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:50 compute-0 sudo[66762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibhdhollyjeytyibyucdirvykqzlvpma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939128.9998384-1088-222968041806225/AnsiballZ_copy.py'
Oct 08 15:58:50 compute-0 sudo[66762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:50 compute-0 python3.9[66764]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939128.9998384-1088-222968041806225/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:50 compute-0 sudo[66762]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:50 compute-0 sudo[66914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjqdiggncqjcevjacqjqbbegkpfvqfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939130.6117425-1088-128746366065841/AnsiballZ_stat.py'
Oct 08 15:58:50 compute-0 sudo[66914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:51 compute-0 python3.9[66916]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:51 compute-0 sudo[66914]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:51 compute-0 sudo[67039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-souphssowpwkkejnktfhjwqlekdtugrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939130.6117425-1088-128746366065841/AnsiballZ_copy.py'
Oct 08 15:58:51 compute-0 sudo[67039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:51 compute-0 python3.9[67041]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939130.6117425-1088-128746366065841/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:51 compute-0 sudo[67039]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:52 compute-0 sudo[67191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgwyfaqfndunltbgzwbwqmpckdpmqyse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939131.9212418-1088-73502035517299/AnsiballZ_stat.py'
Oct 08 15:58:52 compute-0 sudo[67191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:52 compute-0 python3.9[67193]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:52 compute-0 sudo[67191]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:52 compute-0 sudo[67316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yasonafkzmsugjjeltccmehfnncdrwfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939131.9212418-1088-73502035517299/AnsiballZ_copy.py'
Oct 08 15:58:52 compute-0 sudo[67316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:53 compute-0 python3.9[67318]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939131.9212418-1088-73502035517299/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:53 compute-0 sudo[67316]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:53 compute-0 sudo[67468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjdvfkaystqhiotqttxuxwpogolvkhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939133.2673492-1088-274958400323553/AnsiballZ_stat.py'
Oct 08 15:58:53 compute-0 sudo[67468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:53 compute-0 python3.9[67470]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:53 compute-0 sudo[67468]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:54 compute-0 sudo[67593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqmliknmjjwhuhvtwjrxrtdbqiopjhww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939133.2673492-1088-274958400323553/AnsiballZ_copy.py'
Oct 08 15:58:54 compute-0 sudo[67593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:54 compute-0 python3.9[67595]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939133.2673492-1088-274958400323553/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:54 compute-0 sudo[67593]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:54 compute-0 sudo[67745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brsbwctevmkqrrnvzvzakwfjalkkkwlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939134.5977354-1088-273445777547024/AnsiballZ_stat.py'
Oct 08 15:58:54 compute-0 sudo[67745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:55 compute-0 python3.9[67747]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:55 compute-0 sudo[67745]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:55 compute-0 sudo[67870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymkxmxrshxrplugkgjojsggplshszli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939134.5977354-1088-273445777547024/AnsiballZ_copy.py'
Oct 08 15:58:55 compute-0 sudo[67870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:55 compute-0 python3.9[67872]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939134.5977354-1088-273445777547024/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:55 compute-0 sudo[67870]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:56 compute-0 sudo[68022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjkieiylreknrsrmgojwqowupntzrwjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939135.9181583-1088-81470719276823/AnsiballZ_stat.py'
Oct 08 15:58:56 compute-0 sudo[68022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:56 compute-0 python3.9[68024]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:56 compute-0 sudo[68022]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:56 compute-0 sudo[68147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auizsrywbojdhlsicdlekwflnoeqkdth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939135.9181583-1088-81470719276823/AnsiballZ_copy.py'
Oct 08 15:58:56 compute-0 sudo[68147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:57 compute-0 python3.9[68149]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939135.9181583-1088-81470719276823/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:57 compute-0 sudo[68147]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:57 compute-0 sudo[68299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpynwovzrxtlehhaaergxlxbfadyhon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939137.2613401-1088-63328803568984/AnsiballZ_stat.py'
Oct 08 15:58:57 compute-0 sudo[68299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:57 compute-0 python3.9[68301]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:57 compute-0 sudo[68299]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:58 compute-0 sudo[68422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekiayxyxeuzapqcbwbdvznknalepzxgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939137.2613401-1088-63328803568984/AnsiballZ_copy.py'
Oct 08 15:58:58 compute-0 sudo[68422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:58 compute-0 python3.9[68424]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939137.2613401-1088-63328803568984/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:58 compute-0 sudo[68422]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:58 compute-0 sudo[68574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruvgfhoheuncwdynsnxnosnmzchuvsoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939138.5334136-1088-24543899181796/AnsiballZ_stat.py'
Oct 08 15:58:58 compute-0 sudo[68574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:59 compute-0 python3.9[68576]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:58:59 compute-0 sudo[68574]: pam_unix(sudo:session): session closed for user root
Oct 08 15:58:59 compute-0 sudo[68699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukytovpodkjzvroegsimktspkcqnpaht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939138.5334136-1088-24543899181796/AnsiballZ_copy.py'
Oct 08 15:58:59 compute-0 sudo[68699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:58:59 compute-0 python3.9[68701]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759939138.5334136-1088-24543899181796/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:58:59 compute-0 sudo[68699]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:00 compute-0 sudo[68851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynvnoipwlrcbcberxdbugnueruobqqzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939139.8601887-1314-267041658118465/AnsiballZ_command.py'
Oct 08 15:59:00 compute-0 sudo[68851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:00 compute-0 python3.9[68853]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 08 15:59:00 compute-0 sudo[68851]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:00 compute-0 sudo[69005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uejkhzhkfdoximividrvmuopkghjnfvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939140.6452746-1332-175397656905727/AnsiballZ_file.py'
Oct 08 15:59:00 compute-0 sudo[69005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:01 compute-0 python3.9[69007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:01 compute-0 sudo[69005]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:01 compute-0 sudo[69157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lykuesuaaqmnvwdqdajsknfwrtlgxxgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939141.3444178-1332-111643613817046/AnsiballZ_file.py'
Oct 08 15:59:01 compute-0 sudo[69157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:01 compute-0 python3.9[69159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:01 compute-0 sudo[69157]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:02 compute-0 sudo[69309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raddstmmbaekjopyljcqfnhdrypbpvjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939142.0749156-1332-233678717439326/AnsiballZ_file.py'
Oct 08 15:59:02 compute-0 sudo[69309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:02 compute-0 python3.9[69311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:02 compute-0 sudo[69309]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:03 compute-0 sudo[69461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkifodkwoudleiiuvkqhaeggimmhqvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939142.7765698-1332-90510710748/AnsiballZ_file.py'
Oct 08 15:59:03 compute-0 sudo[69461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:03 compute-0 python3.9[69463]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:03 compute-0 sudo[69461]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:03 compute-0 sudo[69613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flvadufdcbygfqhujeazhvbprwqizhbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939143.476792-1332-13172614439826/AnsiballZ_file.py'
Oct 08 15:59:03 compute-0 sudo[69613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:03 compute-0 python3.9[69615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:04 compute-0 sudo[69613]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:04 compute-0 sudo[69765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apnjrbbreqhjrhlmlhnacecurfwthesm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939144.1422384-1332-260306351377023/AnsiballZ_file.py'
Oct 08 15:59:04 compute-0 sudo[69765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:04 compute-0 python3.9[69767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:04 compute-0 sudo[69765]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:05 compute-0 sudo[69917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyqmusesvexqsrsnmcdtzjtuextowhzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939144.8996382-1332-181506597486113/AnsiballZ_file.py'
Oct 08 15:59:05 compute-0 sudo[69917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:05 compute-0 python3.9[69919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:05 compute-0 sudo[69917]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:05 compute-0 sudo[70069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbpghkerycjylqvsmlblvbogkzwbuvwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939145.574101-1332-36034993039647/AnsiballZ_file.py'
Oct 08 15:59:05 compute-0 sudo[70069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:06 compute-0 python3.9[70071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:06 compute-0 sudo[70069]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:06 compute-0 sudo[70221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uavqbifafhysovnbaxtgbrjzebvslgje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939146.2713804-1332-118193399590783/AnsiballZ_file.py'
Oct 08 15:59:06 compute-0 sudo[70221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:06 compute-0 python3.9[70223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:06 compute-0 sudo[70221]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:07 compute-0 sudo[70373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfikbeyjvftzrfdnqdlzilkzsqwxwrpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939146.9650753-1332-32167085468982/AnsiballZ_file.py'
Oct 08 15:59:07 compute-0 sudo[70373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:07 compute-0 python3.9[70375]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:07 compute-0 sudo[70373]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:07 compute-0 sudo[70525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpzbyxgqklghizpywalbncmpcdobjwze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939147.6037035-1332-158627232030033/AnsiballZ_file.py'
Oct 08 15:59:07 compute-0 sudo[70525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:08 compute-0 python3.9[70527]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:08 compute-0 sudo[70525]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:08 compute-0 sudo[70677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbieaykscsxfcstidclxkjqmpudaeduo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939148.374471-1332-94872186066022/AnsiballZ_file.py'
Oct 08 15:59:08 compute-0 sudo[70677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:08 compute-0 python3.9[70679]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:08 compute-0 sudo[70677]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:09 compute-0 sudo[70829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjshmsgpmqerrrkzlkydfzgxisjnqddz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939149.0066097-1332-101812436305730/AnsiballZ_file.py'
Oct 08 15:59:09 compute-0 sudo[70829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:09 compute-0 python3.9[70831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:09 compute-0 sudo[70829]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:10 compute-0 sudo[70981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tglmmhyzfwdxuncghficeuertztfybkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939149.7317722-1332-128954812270485/AnsiballZ_file.py'
Oct 08 15:59:10 compute-0 sudo[70981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:10 compute-0 python3.9[70983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:10 compute-0 sshd-session[68854]: Connection closed by 20.65.145.247 port 46770
Oct 08 15:59:10 compute-0 sudo[70981]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:10 compute-0 sshd-session[70997]: banner exchange: Connection from 20.65.145.247 port 38714: invalid format
Oct 08 15:59:10 compute-0 sudo[71134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuerlvafkhjselaodivlxnlbyxpwkeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939150.4927752-1530-276875714639625/AnsiballZ_stat.py'
Oct 08 15:59:10 compute-0 sudo[71134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:10 compute-0 python3.9[71136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:10 compute-0 sudo[71134]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:11 compute-0 sudo[71257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguxbbzlhahelspjgnhzdksfxvjdgriu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939150.4927752-1530-276875714639625/AnsiballZ_copy.py'
Oct 08 15:59:11 compute-0 sudo[71257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:11 compute-0 python3.9[71259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939150.4927752-1530-276875714639625/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:11 compute-0 sudo[71257]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:11 compute-0 sudo[71409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgvubgojnlrwbljakiavewvabpdeucfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939151.713648-1530-143304549522427/AnsiballZ_stat.py'
Oct 08 15:59:11 compute-0 sudo[71409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:12 compute-0 python3.9[71411]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:12 compute-0 sudo[71409]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:12 compute-0 sudo[71545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cngremmflbzjfeaiwudsewlxzhpbihek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939151.713648-1530-143304549522427/AnsiballZ_copy.py'
Oct 08 15:59:12 compute-0 sudo[71545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:12 compute-0 podman[71506]: 2025-10-08 15:59:12.649645439 +0000 UTC m=+0.063005036 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 15:59:12 compute-0 python3.9[71551]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939151.713648-1530-143304549522427/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:12 compute-0 sudo[71545]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:13 compute-0 sudo[71702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctaikmvajiuzvfclkonikvbbdiorpox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939153.1767333-1530-86715692338352/AnsiballZ_stat.py'
Oct 08 15:59:13 compute-0 sudo[71702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:13 compute-0 python3.9[71704]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:13 compute-0 sudo[71702]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:14 compute-0 sudo[71825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitcyqaasxwplfckhwdavkbxnsyhdbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939153.1767333-1530-86715692338352/AnsiballZ_copy.py'
Oct 08 15:59:14 compute-0 sudo[71825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:14 compute-0 python3.9[71827]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939153.1767333-1530-86715692338352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:14 compute-0 sudo[71825]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:14 compute-0 sudo[71977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpzvpbtggghvtjibvfugucfzmfdceifd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939154.3923647-1530-134512104707247/AnsiballZ_stat.py'
Oct 08 15:59:14 compute-0 sudo[71977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:14 compute-0 python3.9[71979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:14 compute-0 sudo[71977]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:15 compute-0 sudo[72100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtqcrxmltzwtyrjbupmthtnlflouwluz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939154.3923647-1530-134512104707247/AnsiballZ_copy.py'
Oct 08 15:59:15 compute-0 sudo[72100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:15 compute-0 python3.9[72102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939154.3923647-1530-134512104707247/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:15 compute-0 sudo[72100]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:15 compute-0 sudo[72252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndikphkwqtmhmpgphvsjzxbcdfhiqhac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939155.6747656-1530-277596843464490/AnsiballZ_stat.py'
Oct 08 15:59:15 compute-0 sudo[72252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:16 compute-0 python3.9[72254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:16 compute-0 sudo[72252]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:16 compute-0 sudo[72375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddzfcdpxakdrkdrkzrmbthmdabpcexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939155.6747656-1530-277596843464490/AnsiballZ_copy.py'
Oct 08 15:59:16 compute-0 sudo[72375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:16 compute-0 python3.9[72377]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939155.6747656-1530-277596843464490/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:16 compute-0 sudo[72375]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:17 compute-0 sudo[72527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnlllbwttqsvhbtmzwitvmsefqkwwnzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939156.929451-1530-203423423743903/AnsiballZ_stat.py'
Oct 08 15:59:17 compute-0 sudo[72527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:17 compute-0 python3.9[72529]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:17 compute-0 sudo[72527]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:17 compute-0 podman[72530]: 2025-10-08 15:59:17.49027402 +0000 UTC m=+0.096653392 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 15:59:17 compute-0 sudo[72677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sovrdovgczpzgcfrfvrehuixaoadnrwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939156.929451-1530-203423423743903/AnsiballZ_copy.py'
Oct 08 15:59:17 compute-0 sudo[72677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:17 compute-0 python3.9[72679]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939156.929451-1530-203423423743903/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:17 compute-0 sudo[72677]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:18 compute-0 sudo[72829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvqospgdpkhnmgvdxvbklhspiamzyin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939158.1410103-1530-203354276127185/AnsiballZ_stat.py'
Oct 08 15:59:18 compute-0 sudo[72829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:18 compute-0 python3.9[72831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:18 compute-0 sudo[72829]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:18 compute-0 sudo[72952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjcgqkvjxyzdrwolxkwowunxataxiccq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939158.1410103-1530-203354276127185/AnsiballZ_copy.py'
Oct 08 15:59:18 compute-0 sudo[72952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:19 compute-0 python3.9[72954]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939158.1410103-1530-203354276127185/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:19 compute-0 sudo[72952]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:19 compute-0 sudo[73104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbptgpyxiuuiamolkynonljszsqvpsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939159.3063982-1530-187573697066579/AnsiballZ_stat.py'
Oct 08 15:59:19 compute-0 sudo[73104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:19 compute-0 python3.9[73106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:19 compute-0 sudo[73104]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:20 compute-0 sudo[73227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nudhmzmxlevzioefxufwxuqmzfdaknfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939159.3063982-1530-187573697066579/AnsiballZ_copy.py'
Oct 08 15:59:20 compute-0 sudo[73227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:20 compute-0 python3.9[73229]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939159.3063982-1530-187573697066579/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:20 compute-0 sudo[73227]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:20 compute-0 sudo[73379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebnchgfyvkvvmqjgbmopgeelxwzpgcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939160.552568-1530-180940002881498/AnsiballZ_stat.py'
Oct 08 15:59:20 compute-0 sudo[73379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:21 compute-0 python3.9[73381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:21 compute-0 sudo[73379]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:21 compute-0 sudo[73502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kggrimrnztiyoflapkoquswfnvrwerxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939160.552568-1530-180940002881498/AnsiballZ_copy.py'
Oct 08 15:59:21 compute-0 sudo[73502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:21 compute-0 python3.9[73504]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939160.552568-1530-180940002881498/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:21 compute-0 sudo[73502]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:22 compute-0 sudo[73654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmigynpoxqsdnpflqhaumfabifrfptl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939161.8253152-1530-84088581998897/AnsiballZ_stat.py'
Oct 08 15:59:22 compute-0 sudo[73654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:22 compute-0 python3.9[73656]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:22 compute-0 sudo[73654]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:22 compute-0 sudo[73777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpzzihwpklekaocxqxxjeyuxurwrvwpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939161.8253152-1530-84088581998897/AnsiballZ_copy.py'
Oct 08 15:59:22 compute-0 sudo[73777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:22 compute-0 python3.9[73779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939161.8253152-1530-84088581998897/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:22 compute-0 sudo[73777]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:23 compute-0 sudo[73929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnrpfyoiwrjbossslhbkcysozoewovyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939163.1017797-1530-226587912194336/AnsiballZ_stat.py'
Oct 08 15:59:23 compute-0 sudo[73929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:23 compute-0 python3.9[73931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:23 compute-0 sudo[73929]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:24 compute-0 sudo[74052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmzqaofkibsukngokbeqkdqrnqczpul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939163.1017797-1530-226587912194336/AnsiballZ_copy.py'
Oct 08 15:59:24 compute-0 sudo[74052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:24 compute-0 python3.9[74054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939163.1017797-1530-226587912194336/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:24 compute-0 sudo[74052]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:24 compute-0 sudo[74204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guscybajwcbfujarunowboxkuexevnvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939164.4592013-1530-115669623669785/AnsiballZ_stat.py'
Oct 08 15:59:24 compute-0 sudo[74204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:24 compute-0 python3.9[74206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:24 compute-0 sudo[74204]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:25 compute-0 sudo[74327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuaawxrzkygvwiojwjplcbidxgeekplg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939164.4592013-1530-115669623669785/AnsiballZ_copy.py'
Oct 08 15:59:25 compute-0 sudo[74327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:25 compute-0 python3.9[74329]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939164.4592013-1530-115669623669785/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:25 compute-0 sudo[74327]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:25 compute-0 sudo[74479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huvopnhyjavdursggdukmqxxsplxuuqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939165.7235742-1530-122326578149788/AnsiballZ_stat.py'
Oct 08 15:59:26 compute-0 sudo[74479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:26 compute-0 python3.9[74481]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:26 compute-0 sudo[74479]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:26 compute-0 sudo[74602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjthrwuptmuphmprwhfdmgbkfnwtkcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939165.7235742-1530-122326578149788/AnsiballZ_copy.py'
Oct 08 15:59:26 compute-0 sudo[74602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:26 compute-0 python3.9[74604]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939165.7235742-1530-122326578149788/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:26 compute-0 sudo[74602]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:27 compute-0 sudo[74754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ystfcfiuuxmmstngircklylniklzrgmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939167.0555682-1530-4838951619233/AnsiballZ_stat.py'
Oct 08 15:59:27 compute-0 sudo[74754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:27 compute-0 python3.9[74756]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:27 compute-0 sudo[74754]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:27 compute-0 sudo[74877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edmkxdxbemlraikdtybbebbibvknyakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939167.0555682-1530-4838951619233/AnsiballZ_copy.py'
Oct 08 15:59:27 compute-0 sudo[74877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:28 compute-0 python3.9[74879]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939167.0555682-1530-4838951619233/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:28 compute-0 sudo[74877]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:28 compute-0 python3.9[75029]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:59:29 compute-0 sudo[75182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylycufevcwmdxcndhuetqovzzpibiyeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939169.0167675-1942-144989386354838/AnsiballZ_seboolean.py'
Oct 08 15:59:29 compute-0 sudo[75182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:29 compute-0 python3.9[75184]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 08 15:59:30 compute-0 sudo[75182]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:31 compute-0 sudo[75338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvvmygdrxetfvoejjgwewblwppozrxub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939171.6269314-1958-98873783623652/AnsiballZ_copy.py'
Oct 08 15:59:31 compute-0 dbus-broker-launch[839]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 08 15:59:31 compute-0 sudo[75338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:32 compute-0 python3.9[75340]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:32 compute-0 sudo[75338]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:32 compute-0 sudo[75490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjfwzbbimgkcofyvvzqnsyheiqsfkjav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939172.343582-1958-260343301357056/AnsiballZ_copy.py'
Oct 08 15:59:32 compute-0 sudo[75490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:32 compute-0 python3.9[75492]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:32 compute-0 sudo[75490]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:33 compute-0 sudo[75642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yefukrlxvhuicthvrvltyhcvlxowcish ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939173.066539-1958-179870959249033/AnsiballZ_copy.py'
Oct 08 15:59:33 compute-0 sudo[75642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:33 compute-0 python3.9[75644]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:33 compute-0 sudo[75642]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:34 compute-0 sudo[75794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysylbwgdoprjpzrfosmazgjhbgwpkxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939173.7395597-1958-52547907619668/AnsiballZ_copy.py'
Oct 08 15:59:34 compute-0 sudo[75794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:34 compute-0 python3.9[75796]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:34 compute-0 sudo[75794]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:34 compute-0 sudo[75946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhgsviubtwqthxmsliilyqmdlpuxues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939174.3859808-1958-117498397620848/AnsiballZ_copy.py'
Oct 08 15:59:34 compute-0 sudo[75946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:34 compute-0 python3.9[75948]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:34 compute-0 sudo[75946]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:35 compute-0 sudo[76098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lshvobrlntdvvezhuzambsitbtnonwlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939175.136365-2030-224099514573341/AnsiballZ_copy.py'
Oct 08 15:59:35 compute-0 sudo[76098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:35 compute-0 python3.9[76100]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:35 compute-0 sudo[76098]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:36 compute-0 sudo[76250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxfmrduwprbptwoqrqcukqoswwclynwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939175.8203218-2030-26945492898335/AnsiballZ_copy.py'
Oct 08 15:59:36 compute-0 sudo[76250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:36 compute-0 python3.9[76252]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:36 compute-0 sudo[76250]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:36 compute-0 sudo[76402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lukeajhehfuokmdkblfhcpkicvnwzeuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939176.5502002-2030-124081986771136/AnsiballZ_copy.py'
Oct 08 15:59:36 compute-0 sudo[76402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:37 compute-0 python3.9[76404]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:37 compute-0 sudo[76402]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:37 compute-0 sudo[76554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdavbwjumkpmybvibqhqesohxinsddoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939177.3094893-2030-197701129981822/AnsiballZ_copy.py'
Oct 08 15:59:37 compute-0 sudo[76554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:37 compute-0 python3.9[76556]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:37 compute-0 sudo[76554]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:38 compute-0 sudo[76706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldpouypohnmauepzndqkniuvuhillutx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939177.9603083-2030-276584239250972/AnsiballZ_copy.py'
Oct 08 15:59:38 compute-0 sudo[76706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:38 compute-0 python3.9[76708]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:38 compute-0 sudo[76706]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:39 compute-0 sudo[76858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffqzlzgumvduqmeihcofriiviiretbqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939178.689391-2102-155089659373673/AnsiballZ_systemd.py'
Oct 08 15:59:39 compute-0 sudo[76858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:39 compute-0 python3.9[76860]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 15:59:39 compute-0 systemd[1]: Reloading.
Oct 08 15:59:39 compute-0 systemd-rc-local-generator[76888]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:59:39 compute-0 systemd-sysv-generator[76892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:59:39 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Oct 08 15:59:39 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Oct 08 15:59:39 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 08 15:59:39 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 08 15:59:39 compute-0 systemd[1]: Starting libvirt logging daemon...
Oct 08 15:59:39 compute-0 systemd[1]: Started libvirt logging daemon.
Oct 08 15:59:39 compute-0 sudo[76858]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:40 compute-0 sudo[77051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbfzjryzdffjwgyzktaoqboydwormcbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939179.9318862-2102-184924998257079/AnsiballZ_systemd.py'
Oct 08 15:59:40 compute-0 sudo[77051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:40 compute-0 python3.9[77053]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 15:59:40 compute-0 systemd[1]: Reloading.
Oct 08 15:59:40 compute-0 systemd-rc-local-generator[77080]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:59:40 compute-0 systemd-sysv-generator[77083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:59:40 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 08 15:59:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 08 15:59:40 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 08 15:59:40 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 08 15:59:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 08 15:59:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 08 15:59:40 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 08 15:59:40 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 08 15:59:40 compute-0 sudo[77051]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:41 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 08 15:59:41 compute-0 sudo[77267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrpzdmapmvwzqaxjprjvbjjggspsatj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939181.08741-2102-158935505287176/AnsiballZ_systemd.py'
Oct 08 15:59:41 compute-0 sudo[77267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:41 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 08 15:59:41 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 08 15:59:41 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 08 15:59:41 compute-0 python3.9[77269]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 15:59:41 compute-0 systemd[1]: Reloading.
Oct 08 15:59:41 compute-0 systemd-rc-local-generator[77303]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:59:41 compute-0 systemd-sysv-generator[77306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:59:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:59:41.850 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 15:59:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:59:41.851 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 15:59:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 15:59:41.851 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 15:59:42 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 08 15:59:42 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 08 15:59:42 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 08 15:59:42 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 08 15:59:42 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 15:59:42 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 15:59:42 compute-0 sudo[77267]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:42 compute-0 sudo[77486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sleylvvjkqixbsmotphlpreynkckdous ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939182.2477-2102-244717169432885/AnsiballZ_systemd.py'
Oct 08 15:59:42 compute-0 sudo[77486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:42 compute-0 setroubleshoot[77141]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d98df474-b89d-4e83-b26b-f607da58734f
Oct 08 15:59:42 compute-0 setroubleshoot[77141]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                 
                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                 
                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                 Do
                                                 
                                                 Turn on full auditing
                                                 # auditctl -w /etc/shadow -p w
                                                 Try to recreate AVC. Then execute
                                                 # ausearch -m avc -ts recent
                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                 otherwise report as a bugzilla.
                                                 
                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                 
                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                 Then you should report this as a bug.
                                                 You can generate a local policy module to allow this access.
                                                 Do
                                                 allow this access for now by executing:
                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                 # semodule -X 300 -i my-virtlogd.pp
                                                 
Oct 08 15:59:42 compute-0 python3.9[77488]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 15:59:42 compute-0 systemd[1]: Reloading.
Oct 08 15:59:43 compute-0 systemd-sysv-generator[77539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:59:43 compute-0 systemd-rc-local-generator[77536]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:59:43 compute-0 podman[77490]: 2025-10-08 15:59:43.032559956 +0000 UTC m=+0.090219668 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 08 15:59:43 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Oct 08 15:59:43 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 08 15:59:43 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 08 15:59:43 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 08 15:59:43 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 08 15:59:43 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 08 15:59:43 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 08 15:59:43 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 08 15:59:43 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 08 15:59:43 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 08 15:59:43 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 08 15:59:43 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 08 15:59:43 compute-0 sudo[77486]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:43 compute-0 sudo[77719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxpgszmacxjxljdjagdxzrmxxywffjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939183.5303147-2102-15791923082570/AnsiballZ_systemd.py'
Oct 08 15:59:43 compute-0 sudo[77719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:44 compute-0 python3.9[77721]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 15:59:44 compute-0 systemd[1]: Reloading.
Oct 08 15:59:44 compute-0 systemd-rc-local-generator[77743]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 15:59:44 compute-0 systemd-sysv-generator[77752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 15:59:44 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Oct 08 15:59:44 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Oct 08 15:59:44 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 08 15:59:44 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 08 15:59:44 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 08 15:59:44 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 08 15:59:44 compute-0 systemd[1]: Starting libvirt secret daemon...
Oct 08 15:59:44 compute-0 systemd[1]: Started libvirt secret daemon.
Oct 08 15:59:44 compute-0 sudo[77719]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:45 compute-0 sudo[77929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwftuoexuaxoabwqghfphxtrmwoobuag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939184.8583236-2176-126242505723044/AnsiballZ_file.py'
Oct 08 15:59:45 compute-0 sudo[77929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:45 compute-0 python3.9[77931]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:45 compute-0 sudo[77929]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:45 compute-0 sudo[78081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbefugdcjldthyhlsdttpsixisqxowce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939185.5595345-2192-63976846868094/AnsiballZ_find.py'
Oct 08 15:59:45 compute-0 sudo[78081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:46 compute-0 python3.9[78083]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 15:59:46 compute-0 sudo[78081]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:46 compute-0 sudo[78233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxmnuiioceivnoivjntpxbzqayyzkqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939186.4692392-2220-275186026555559/AnsiballZ_stat.py'
Oct 08 15:59:46 compute-0 sudo[78233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:46 compute-0 python3.9[78235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:46 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 15:59:46 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 15:59:46 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 15:59:46 compute-0 sudo[78233]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:47 compute-0 sudo[78357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqgcksijeclxhsufxupwhspesxuddbnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939186.4692392-2220-275186026555559/AnsiballZ_copy.py'
Oct 08 15:59:47 compute-0 sudo[78357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:47 compute-0 python3.9[78359]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939186.4692392-2220-275186026555559/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:47 compute-0 sudo[78357]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:48 compute-0 sudo[78520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnoybzgjoydtpxptflaburdtcexxydiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939187.9316037-2252-85205443035839/AnsiballZ_file.py'
Oct 08 15:59:48 compute-0 sudo[78520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:48 compute-0 podman[78483]: 2025-10-08 15:59:48.306341312 +0000 UTC m=+0.117508996 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 15:59:48 compute-0 python3.9[78528]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:48 compute-0 sudo[78520]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:48 compute-0 sudo[78687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ownwkgvsyndusyifltpywwagjbuvlgqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939188.6297178-2268-55529398959941/AnsiballZ_stat.py'
Oct 08 15:59:48 compute-0 sudo[78687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:49 compute-0 python3.9[78689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:49 compute-0 sudo[78687]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:49 compute-0 sudo[78765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ramteilqyvqobmnizmtftfptdxowazzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939188.6297178-2268-55529398959941/AnsiballZ_file.py'
Oct 08 15:59:49 compute-0 sudo[78765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:49 compute-0 python3.9[78767]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:49 compute-0 sudo[78765]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:50 compute-0 sudo[78917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfrddltmuobvnswgkfxnahidzytolyxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939189.8613555-2292-107947791480063/AnsiballZ_stat.py'
Oct 08 15:59:50 compute-0 sudo[78917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:50 compute-0 python3.9[78919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:50 compute-0 sudo[78917]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:50 compute-0 sudo[78995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuuxsomdwqbbkqdbeytfdyldpdotxqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939189.8613555-2292-107947791480063/AnsiballZ_file.py'
Oct 08 15:59:50 compute-0 sudo[78995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:50 compute-0 python3.9[78997]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.y_cpxjv8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:50 compute-0 sudo[78995]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:51 compute-0 sudo[79147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtkypjwgdmqoxvzactpgdcrspykvpwhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939191.0029793-2316-278347609240754/AnsiballZ_stat.py'
Oct 08 15:59:51 compute-0 sudo[79147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:51 compute-0 python3.9[79149]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:51 compute-0 sudo[79147]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:51 compute-0 sudo[79225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqkfwmqajrklilnuiphvqubahckrxrfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939191.0029793-2316-278347609240754/AnsiballZ_file.py'
Oct 08 15:59:51 compute-0 sudo[79225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:51 compute-0 python3.9[79227]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:51 compute-0 sudo[79225]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:52 compute-0 sudo[79377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvdmoemhxowvewykqmvdiiocacnsgttr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939192.1715896-2342-151461216954541/AnsiballZ_command.py'
Oct 08 15:59:52 compute-0 sudo[79377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:52 compute-0 python3.9[79379]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 15:59:52 compute-0 sudo[79377]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 08 15:59:52 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.008s CPU time.
Oct 08 15:59:52 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 08 15:59:53 compute-0 sudo[79530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyqyxsabhhqzdekyfklhafhuqrcexgnw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939192.7880335-2358-24729568510886/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 15:59:53 compute-0 sudo[79530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:53 compute-0 python3[79532]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 15:59:53 compute-0 sudo[79530]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:53 compute-0 sudo[79682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auvfxvdvaxwtwbarknfdpnnepyfutlbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939193.5680072-2374-233916769171147/AnsiballZ_stat.py'
Oct 08 15:59:53 compute-0 sudo[79682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:54 compute-0 python3.9[79684]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:54 compute-0 sudo[79682]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:54 compute-0 sudo[79760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpdelmuxdnbzauhnpjfcohqrgsiyzri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939193.5680072-2374-233916769171147/AnsiballZ_file.py'
Oct 08 15:59:54 compute-0 sudo[79760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:54 compute-0 python3.9[79762]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:54 compute-0 sudo[79760]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:55 compute-0 sudo[79912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfkphxyjcakhzxdodkegljdplwpardih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939194.8846126-2398-89466199800493/AnsiballZ_stat.py'
Oct 08 15:59:55 compute-0 sudo[79912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:55 compute-0 python3.9[79914]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:55 compute-0 sudo[79912]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:55 compute-0 sudo[79990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vewfydlfiwqpohpnsvhonpvhrcpvlhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939194.8846126-2398-89466199800493/AnsiballZ_file.py'
Oct 08 15:59:55 compute-0 sudo[79990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:55 compute-0 python3.9[79992]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:55 compute-0 sudo[79990]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:56 compute-0 sudo[80142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzpwevtiadtlgunumobcumdnqzjodkdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939196.1805398-2422-220340111880724/AnsiballZ_stat.py'
Oct 08 15:59:56 compute-0 sudo[80142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:56 compute-0 python3.9[80144]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:56 compute-0 sudo[80142]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:56 compute-0 sudo[80220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeehhusatterwhjsluopfiodupylwqgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939196.1805398-2422-220340111880724/AnsiballZ_file.py'
Oct 08 15:59:56 compute-0 sudo[80220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:57 compute-0 python3.9[80222]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:57 compute-0 sudo[80220]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:57 compute-0 sudo[80372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vttzwolnvlpkhwdgzgywtachrhjxyuzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939197.3867257-2446-98172029772098/AnsiballZ_stat.py'
Oct 08 15:59:57 compute-0 sudo[80372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:57 compute-0 python3.9[80374]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:57 compute-0 sudo[80372]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:58 compute-0 sudo[80450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmubfrbmgemxpomryduwoygyfcosrdbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939197.3867257-2446-98172029772098/AnsiballZ_file.py'
Oct 08 15:59:58 compute-0 sudo[80450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:58 compute-0 python3.9[80452]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:58 compute-0 sudo[80450]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:58 compute-0 sudo[80602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqajvjxhxlzgjcxipryzlcpuvttatyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939198.5940554-2470-197471662028022/AnsiballZ_stat.py'
Oct 08 15:59:58 compute-0 sudo[80602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:59 compute-0 python3.9[80604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 15:59:59 compute-0 sudo[80602]: pam_unix(sudo:session): session closed for user root
Oct 08 15:59:59 compute-0 sudo[80727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvvqrlmfrycikitnvqikaucdfbyzhubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939198.5940554-2470-197471662028022/AnsiballZ_copy.py'
Oct 08 15:59:59 compute-0 sudo[80727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 15:59:59 compute-0 python3.9[80729]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939198.5940554-2470-197471662028022/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 15:59:59 compute-0 sudo[80727]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 08 16:00:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 08 16:00:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 08 16:00:00 compute-0 sudo[80880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdzdgbseqctcvnyitdsammdvghemsms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939200.08963-2500-261567710375420/AnsiballZ_file.py'
Oct 08 16:00:00 compute-0 sudo[80880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:00 compute-0 python3.9[80882]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:00 compute-0 sudo[80880]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:01 compute-0 sudo[81032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbwhvbmlwkzfudzhbjyhggxztgfxvcqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939200.860416-2516-236772685065639/AnsiballZ_command.py'
Oct 08 16:00:01 compute-0 sudo[81032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:01 compute-0 python3.9[81034]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:00:01 compute-0 sudo[81032]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:02 compute-0 sudo[81187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxaewcufhjrntvxokentpzvygvfmtbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939201.6032436-2532-44025925808146/AnsiballZ_blockinfile.py'
Oct 08 16:00:02 compute-0 sudo[81187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:02 compute-0 python3.9[81189]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:02 compute-0 sudo[81187]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:02 compute-0 sudo[81339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdoavngjzjxlnvbktmfwjcxohfcmjjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939202.5627947-2550-3787672520371/AnsiballZ_command.py'
Oct 08 16:00:02 compute-0 sudo[81339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:03 compute-0 python3.9[81341]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:00:03 compute-0 sudo[81339]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:03 compute-0 sudo[81492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nupcvjbsllemolrkamneuyiernmbtsmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939203.3351605-2566-213387432100313/AnsiballZ_stat.py'
Oct 08 16:00:03 compute-0 sudo[81492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:03 compute-0 python3.9[81494]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:00:03 compute-0 sudo[81492]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:04 compute-0 sudo[81646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxevompiyrcehruscrcbmzkgyqxmlja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939203.9762533-2582-47132530053149/AnsiballZ_command.py'
Oct 08 16:00:04 compute-0 sudo[81646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:04 compute-0 python3.9[81648]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:00:04 compute-0 sudo[81646]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:05 compute-0 sudo[81801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsuxssiptvxkhusufzskfiocjfdaiwxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939204.6917527-2598-270260057648888/AnsiballZ_file.py'
Oct 08 16:00:05 compute-0 sudo[81801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:05 compute-0 python3.9[81803]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:05 compute-0 sudo[81801]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:05 compute-0 sudo[81953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okebqjenjkooaibgrsihbpeiptwqsnra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939205.4448302-2614-165127017637750/AnsiballZ_stat.py'
Oct 08 16:00:05 compute-0 sudo[81953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:05 compute-0 python3.9[81955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:05 compute-0 sudo[81953]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:06 compute-0 sudo[82076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nutuvrdnojqtbkbimtkhxmihlbddnpmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939205.4448302-2614-165127017637750/AnsiballZ_copy.py'
Oct 08 16:00:06 compute-0 sudo[82076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:06 compute-0 python3.9[82078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939205.4448302-2614-165127017637750/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:06 compute-0 sudo[82076]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:07 compute-0 sudo[82228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkbzuhbprkfcjwhnkyiytpnnkttuhedd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939206.7413445-2644-152587786926556/AnsiballZ_stat.py'
Oct 08 16:00:07 compute-0 sudo[82228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:07 compute-0 python3.9[82230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:07 compute-0 sudo[82228]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:07 compute-0 sudo[82351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zutztsbwfsgyzzyotaacimmembzyzvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939206.7413445-2644-152587786926556/AnsiballZ_copy.py'
Oct 08 16:00:07 compute-0 sudo[82351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:07 compute-0 python3.9[82353]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939206.7413445-2644-152587786926556/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:07 compute-0 sudo[82351]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:08 compute-0 sudo[82503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqmyzxspryegxkoxufdkjhlqhahoesbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939208.034277-2674-66321153774185/AnsiballZ_stat.py'
Oct 08 16:00:08 compute-0 sudo[82503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:08 compute-0 python3.9[82505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:08 compute-0 sudo[82503]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:08 compute-0 sudo[82626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvwicwwikhilkwdvaangjplrqivnziim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939208.034277-2674-66321153774185/AnsiballZ_copy.py'
Oct 08 16:00:08 compute-0 sudo[82626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:09 compute-0 python3.9[82628]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939208.034277-2674-66321153774185/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:09 compute-0 sudo[82626]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:09 compute-0 sudo[82778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dquamdxsengkcbnlbcpyllaqsfoyvmyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939209.3989484-2704-103228413387275/AnsiballZ_systemd.py'
Oct 08 16:00:09 compute-0 sudo[82778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:10 compute-0 python3.9[82780]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:00:10 compute-0 systemd[1]: Reloading.
Oct 08 16:00:10 compute-0 systemd-rc-local-generator[82805]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:10 compute-0 systemd-sysv-generator[82809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:10 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Oct 08 16:00:10 compute-0 sudo[82778]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:10 compute-0 sudo[82969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxamvobbzazsfvuoreexhxvycvwkxfjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939210.596917-2720-11862630926164/AnsiballZ_systemd.py'
Oct 08 16:00:10 compute-0 sudo[82969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:11 compute-0 python3.9[82971]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 08 16:00:11 compute-0 systemd[1]: Reloading.
Oct 08 16:00:11 compute-0 systemd-sysv-generator[83003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:11 compute-0 systemd-rc-local-generator[83000]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:11 compute-0 systemd[1]: Reloading.
Oct 08 16:00:11 compute-0 systemd-sysv-generator[83039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:11 compute-0 systemd-rc-local-generator[83036]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:11 compute-0 sudo[82969]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:12 compute-0 sshd-session[28785]: Connection closed by 192.168.122.30 port 53930
Oct 08 16:00:12 compute-0 sshd-session[28782]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:00:12 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Oct 08 16:00:12 compute-0 systemd[1]: session-7.scope: Consumed 3min 40.083s CPU time.
Oct 08 16:00:12 compute-0 systemd-logind[847]: Session 7 logged out. Waiting for processes to exit.
Oct 08 16:00:12 compute-0 systemd-logind[847]: Removed session 7.
Oct 08 16:00:13 compute-0 podman[83068]: 2025-10-08 16:00:13.469048344 +0000 UTC m=+0.067365938 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:00:18 compute-0 sshd-session[83088]: Accepted publickey for zuul from 192.168.122.30 port 60736 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:00:18 compute-0 systemd-logind[847]: New session 8 of user zuul.
Oct 08 16:00:18 compute-0 systemd[1]: Started Session 8 of User zuul.
Oct 08 16:00:18 compute-0 sshd-session[83088]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:00:18 compute-0 podman[83090]: 2025-10-08 16:00:18.512336388 +0000 UTC m=+0.115231421 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 08 16:00:19 compute-0 python3.9[83267]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 16:00:20 compute-0 sudo[83421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxmxkglccpdxytcccsvwzbhnlokndwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939220.0605617-48-224863543979741/AnsiballZ_file.py'
Oct 08 16:00:20 compute-0 sudo[83421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:20 compute-0 python3.9[83423]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:20 compute-0 sudo[83421]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:21 compute-0 sudo[83573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lchhncltjqvnfjkafcttungbpyihfdeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939220.9718235-48-45138084478251/AnsiballZ_file.py'
Oct 08 16:00:21 compute-0 sudo[83573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:21 compute-0 python3.9[83575]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:21 compute-0 sudo[83573]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:21 compute-0 sudo[83725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iphbouvkscrvetdurclqgeztxoozajzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939221.6673622-48-149722032975460/AnsiballZ_file.py'
Oct 08 16:00:21 compute-0 sudo[83725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:22 compute-0 python3.9[83727]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:22 compute-0 sudo[83725]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:22 compute-0 sudo[83877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tywmvwceqsqjljdyxdsyxdkxlonxncnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939222.3683205-48-46396844529549/AnsiballZ_file.py'
Oct 08 16:00:22 compute-0 sudo[83877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:22 compute-0 python3.9[83879]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 16:00:22 compute-0 sudo[83877]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:23 compute-0 sudo[84029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqixfgnomlcdautkmyrofahgzsdwxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939223.0174-48-74578448511599/AnsiballZ_file.py'
Oct 08 16:00:23 compute-0 sudo[84029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:23 compute-0 python3.9[84031]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:23 compute-0 sudo[84029]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:24 compute-0 sudo[84181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlsxkgycbrrzbhvbkudibjbpaiqjxesp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939223.6968253-120-251781287166164/AnsiballZ_stat.py'
Oct 08 16:00:24 compute-0 sudo[84181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:24 compute-0 python3.9[84183]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:00:24 compute-0 sudo[84181]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:25 compute-0 sudo[84335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqodsddcqapvovjssvuylbvwbkrmyrdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939224.5437558-136-104255802771983/AnsiballZ_systemd.py'
Oct 08 16:00:25 compute-0 sudo[84335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:25 compute-0 python3.9[84337]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:00:25 compute-0 systemd[1]: Reloading.
Oct 08 16:00:25 compute-0 systemd-sysv-generator[84367]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:25 compute-0 systemd-rc-local-generator[84364]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:25 compute-0 sudo[84335]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:26 compute-0 sudo[84523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcvosnqazxqezpqzwmxznrhlmunojcsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939226.0508518-152-187084734573647/AnsiballZ_service_facts.py'
Oct 08 16:00:26 compute-0 sudo[84523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:26 compute-0 python3.9[84525]: ansible-ansible.builtin.service_facts Invoked
Oct 08 16:00:26 compute-0 network[84542]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 16:00:26 compute-0 network[84543]: 'network-scripts' will be removed from distribution in near future.
Oct 08 16:00:26 compute-0 network[84544]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 16:00:30 compute-0 sudo[84523]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:31 compute-0 sudo[84815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwcgouqmmdgpkdrgmllyqjuhdhcciuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939230.852806-168-238668785974034/AnsiballZ_systemd.py'
Oct 08 16:00:31 compute-0 sudo[84815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:31 compute-0 python3.9[84817]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:00:31 compute-0 systemd[1]: Reloading.
Oct 08 16:00:31 compute-0 systemd-rc-local-generator[84845]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:31 compute-0 systemd-sysv-generator[84849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:32 compute-0 sudo[84815]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:32 compute-0 python3.9[85003]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:00:33 compute-0 sudo[85153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hebtzpvyfnaafpfnhttcifagsnpkkfli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939232.9375522-202-107030553889977/AnsiballZ_podman_container.py'
Oct 08 16:00:33 compute-0 sudo[85153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:33 compute-0 python3.9[85155]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 08 16:00:33 compute-0 podman[85191]: 2025-10-08 16:00:33.925452348 +0000 UTC m=+0.049501432 container create 3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, io.buildah.version=1.41.4)
Oct 08 16:00:33 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:00:33 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:00:33 compute-0 NetworkManager[1034]: <info>  [1759939233.9743] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/20)
Oct 08 16:00:33 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 08 16:00:33 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 08 16:00:33 compute-0 kernel: veth0: entered allmulticast mode
Oct 08 16:00:33 compute-0 kernel: veth0: entered promiscuous mode
Oct 08 16:00:33 compute-0 kernel: podman0: port 1(veth0) entered blocking state
Oct 08 16:00:33 compute-0 kernel: podman0: port 1(veth0) entered forwarding state
Oct 08 16:00:33 compute-0 NetworkManager[1034]: <info>  [1759939233.9948] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Oct 08 16:00:33 compute-0 NetworkManager[1034]: <info>  [1759939233.9974] device (veth0): carrier: link connected
Oct 08 16:00:33 compute-0 NetworkManager[1034]: <info>  [1759939233.9981] device (podman0): carrier: link connected
Oct 08 16:00:33 compute-0 podman[85191]: 2025-10-08 16:00:33.902766482 +0000 UTC m=+0.026815576 image pull d616130b511b8ac89010d6032e64392b92f4b2139a68678af8c27be64522adb9 38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Oct 08 16:00:34 compute-0 systemd-udevd[85218]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:00:34 compute-0 systemd-udevd[85222]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0293] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0301] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0310] device (podman0): Activation: starting connection 'podman0' (c4221d01-1b61-41ce-9383-066daeae04f3)
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0312] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0315] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0317] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0319] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 08 16:00:34 compute-0 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 08 16:00:34 compute-0 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 08 16:00:34 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 08 16:00:34 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0762] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0764] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.0772] device (podman0): Activation: successful, device activated.
Oct 08 16:00:34 compute-0 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 08 16:00:34 compute-0 systemd[1]: Started libpod-conmon-3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003.scope.
Oct 08 16:00:34 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:00:34 compute-0 podman[85191]: 2025-10-08 16:00:34.456139952 +0000 UTC m=+0.580189056 container init 3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 16:00:34 compute-0 podman[85191]: 2025-10-08 16:00:34.464986208 +0000 UTC m=+0.589035282 container start 3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 08 16:00:34 compute-0 podman[85191]: 2025-10-08 16:00:34.468347645 +0000 UTC m=+0.592396719 container attach 3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:00:34 compute-0 iscsid_config[85354]: iqn.1994-05.com.redhat:a21f294ea281
Oct 08 16:00:34 compute-0 systemd[1]: libpod-3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003.scope: Deactivated successfully.
Oct 08 16:00:34 compute-0 conmon[85354]: conmon 3258e2b49febb662ed4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003.scope/container/memory.events
Oct 08 16:00:34 compute-0 podman[85191]: 2025-10-08 16:00:34.474202044 +0000 UTC m=+0.598251118 container died 3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 08 16:00:34 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 08 16:00:34 compute-0 kernel: veth0 (unregistering): left allmulticast mode
Oct 08 16:00:34 compute-0 kernel: veth0 (unregistering): left promiscuous mode
Oct 08 16:00:34 compute-0 kernel: podman0: port 1(veth0) entered disabled state
Oct 08 16:00:34 compute-0 NetworkManager[1034]: <info>  [1759939234.5325] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:00:34 compute-0 systemd[1]: run-netns-netns\x2d525c6f61\x2dd524\x2d5d67\x2dc216\x2d5cac762c8b11.mount: Deactivated successfully.
Oct 08 16:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003-userdata-shm.mount: Deactivated successfully.
Oct 08 16:00:34 compute-0 podman[85191]: 2025-10-08 16:00:34.887748373 +0000 UTC m=+1.011797447 container remove 3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid_config, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:00:34 compute-0 systemd[1]: libpod-conmon-3258e2b49febb662ed4b81cdab64a0110a31b67141ec7f3e9ed9533bfa367003.scope: Deactivated successfully.
Oct 08 16:00:34 compute-0 python3.9[85155]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True 38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest /usr/sbin/iscsi-iname
Oct 08 16:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a21c0e38551fd74a3b7be346285f3b9d0b80d27440614ec3b979a8d235343c4-merged.mount: Deactivated successfully.
Oct 08 16:00:35 compute-0 python3.9[85155]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                            DEPRECATED command:
                                            It is recommended to use Quadlets for running containers and pods under systemd.
                                            
                                            Please refer to podman-systemd.unit(5) for details.
                                            Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 08 16:00:35 compute-0 sudo[85153]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:35 compute-0 sudo[85594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkrokrdzegzebhccawxbdsghtoqmgxxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939235.200423-218-34308068888597/AnsiballZ_stat.py'
Oct 08 16:00:35 compute-0 sudo[85594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:35 compute-0 python3.9[85596]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:35 compute-0 sudo[85594]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:36 compute-0 sudo[85717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gudzdnzpyyduqyvenfibhmtkvhfhadyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939235.200423-218-34308068888597/AnsiballZ_copy.py'
Oct 08 16:00:36 compute-0 sudo[85717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:36 compute-0 python3.9[85719]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939235.200423-218-34308068888597/.source.iscsi _original_basename=.bdyp30mk follow=False checksum=80a2f02795859f28b6f45913cd40884ce98d3b72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:36 compute-0 sudo[85717]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:37 compute-0 sudo[85869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncmtxvlmkfxwsdfynlzlsrrrdvvhgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939236.7663202-248-266157800295914/AnsiballZ_file.py'
Oct 08 16:00:37 compute-0 sudo[85869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:37 compute-0 python3.9[85871]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:37 compute-0 sudo[85869]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:37 compute-0 python3.9[86021]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:00:38 compute-0 sudo[86173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxmhlxksuigbcgwidcygamyakshrgegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939238.1897452-282-165482603054805/AnsiballZ_lineinfile.py'
Oct 08 16:00:38 compute-0 sudo[86173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:38 compute-0 python3.9[86175]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:38 compute-0 sudo[86173]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:39 compute-0 sudo[86325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyybuurqdgqxljwaineeopmnoyzgomlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939239.2096076-300-194439136626736/AnsiballZ_file.py'
Oct 08 16:00:39 compute-0 sudo[86325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:39 compute-0 python3.9[86327]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:39 compute-0 sudo[86325]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:40 compute-0 sudo[86477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scpqohjmddynawacshdfhwlmuyrpeksd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939239.940452-316-74718928993818/AnsiballZ_stat.py'
Oct 08 16:00:40 compute-0 sudo[86477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:40 compute-0 python3.9[86479]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:40 compute-0 sudo[86477]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:40 compute-0 sudo[86555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aynutzsghzwqyxynslkgwyturddrznar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939239.940452-316-74718928993818/AnsiballZ_file.py'
Oct 08 16:00:40 compute-0 sudo[86555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:40 compute-0 python3.9[86557]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:40 compute-0 sudo[86555]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:41 compute-0 sudo[86707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbwedvpzegmelflpuzqztsraoikabnob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939241.1974123-316-45949799180179/AnsiballZ_stat.py'
Oct 08 16:00:41 compute-0 sudo[86707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:41 compute-0 python3.9[86709]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:41 compute-0 sudo[86707]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:00:41.853 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:00:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:00:41.854 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:00:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:00:41.854 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:00:41 compute-0 sudo[86786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdgmtsqbimgqyxenkrmroudfpjqybptu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939241.1974123-316-45949799180179/AnsiballZ_file.py'
Oct 08 16:00:41 compute-0 sudo[86786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:42 compute-0 python3.9[86788]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:42 compute-0 sudo[86786]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:42 compute-0 sudo[86938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uguidyjnvbcebltpbhjhaokrpgfypdje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939242.3730245-362-162312132102434/AnsiballZ_file.py'
Oct 08 16:00:42 compute-0 sudo[86938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:42 compute-0 python3.9[86940]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:42 compute-0 sudo[86938]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:43 compute-0 sudo[87090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledfdkcnajweflbdckycyuzkuoaagxma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939243.1092708-378-153824206593266/AnsiballZ_stat.py'
Oct 08 16:00:43 compute-0 sudo[87090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:43 compute-0 python3.9[87092]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:43 compute-0 sudo[87090]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:43 compute-0 sudo[87178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-catgpnxoytopxxvtjscycbyzqrlgzqor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939243.1092708-378-153824206593266/AnsiballZ_file.py'
Oct 08 16:00:43 compute-0 sudo[87178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:43 compute-0 podman[87142]: 2025-10-08 16:00:43.955306464 +0000 UTC m=+0.078199979 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 08 16:00:44 compute-0 python3.9[87185]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:44 compute-0 sudo[87178]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:44 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 08 16:00:44 compute-0 sudo[87340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jssabeharvvdnvbmihtimjpzohenwmvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939244.3311553-402-83273016545899/AnsiballZ_stat.py'
Oct 08 16:00:44 compute-0 sudo[87340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:44 compute-0 python3.9[87342]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:45 compute-0 sudo[87340]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:45 compute-0 sudo[87418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkcnsiueprivxnipqrumorsribsxewqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939244.3311553-402-83273016545899/AnsiballZ_file.py'
Oct 08 16:00:45 compute-0 sudo[87418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:45 compute-0 python3.9[87420]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:45 compute-0 sudo[87418]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:45 compute-0 sudo[87570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjbmffdzxmhsexapfvthxwahsnvsyqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939245.6176114-426-211846978663586/AnsiballZ_systemd.py'
Oct 08 16:00:45 compute-0 sudo[87570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:46 compute-0 python3.9[87572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:00:46 compute-0 systemd[1]: Reloading.
Oct 08 16:00:46 compute-0 systemd-rc-local-generator[87600]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:46 compute-0 systemd-sysv-generator[87603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:46 compute-0 sudo[87570]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:47 compute-0 sudo[87760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzzqadzbdqwixehfianwuoirsdsnyjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939246.844457-442-253279368386118/AnsiballZ_stat.py'
Oct 08 16:00:47 compute-0 sudo[87760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:47 compute-0 python3.9[87762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:47 compute-0 sudo[87760]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:47 compute-0 sudo[87838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnnqsedqqatgwzlojwpjohvkkujanwxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939246.844457-442-253279368386118/AnsiballZ_file.py'
Oct 08 16:00:47 compute-0 sudo[87838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:47 compute-0 python3.9[87840]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:47 compute-0 sudo[87838]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:48 compute-0 sudo[87990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvkrfbtuamquskwhkvvjddtxknmetpil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939248.077998-466-227033914337529/AnsiballZ_stat.py'
Oct 08 16:00:48 compute-0 sudo[87990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:48 compute-0 python3.9[87992]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:48 compute-0 sudo[87990]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:48 compute-0 sudo[88081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awnzdqgttdycwuqxbmcvbuuskhvhvmxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939248.077998-466-227033914337529/AnsiballZ_file.py'
Oct 08 16:00:48 compute-0 sudo[88081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:48 compute-0 podman[88042]: 2025-10-08 16:00:48.875975773 +0000 UTC m=+0.095797604 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:00:49 compute-0 python3.9[88089]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:49 compute-0 sudo[88081]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:49 compute-0 sudo[88246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtsxvwzxmfskugqvkgyudothfgjqcuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939249.2325232-490-175571220680785/AnsiballZ_systemd.py'
Oct 08 16:00:49 compute-0 sudo[88246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:49 compute-0 python3.9[88248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:00:49 compute-0 systemd[1]: Reloading.
Oct 08 16:00:50 compute-0 systemd-rc-local-generator[88267]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:00:50 compute-0 systemd-sysv-generator[88273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:00:50 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 16:00:50 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 16:00:50 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 16:00:50 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 16:00:50 compute-0 sudo[88246]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:51 compute-0 sudo[88438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putjabbpgstgudeupwipkpqcqlxaqwlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939250.6983976-510-133290651352607/AnsiballZ_file.py'
Oct 08 16:00:51 compute-0 sudo[88438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:51 compute-0 python3.9[88440]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:51 compute-0 sudo[88438]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:51 compute-0 sudo[88590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwejuejrwjiywxxrwnhmtsomyjidzsmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939251.456285-526-167540164757022/AnsiballZ_stat.py'
Oct 08 16:00:51 compute-0 sudo[88590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:52 compute-0 python3.9[88592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:52 compute-0 sudo[88590]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:52 compute-0 sudo[88713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctvjeinrbjmfjcfremabzalvvoosxfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939251.456285-526-167540164757022/AnsiballZ_copy.py'
Oct 08 16:00:52 compute-0 sudo[88713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:52 compute-0 python3.9[88715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939251.456285-526-167540164757022/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:52 compute-0 sudo[88713]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:53 compute-0 sudo[88865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yddbhjndpjbcveewymgvvwzkcycjyqmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939253.1699145-560-256184571682222/AnsiballZ_file.py'
Oct 08 16:00:53 compute-0 sudo[88865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:53 compute-0 python3.9[88867]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:00:53 compute-0 sudo[88865]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:54 compute-0 sudo[89017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktcqvdsjafrsbqyxnjuholskqgssrskl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939253.8702788-576-120879007406051/AnsiballZ_stat.py'
Oct 08 16:00:54 compute-0 sudo[89017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:54 compute-0 python3.9[89019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:00:54 compute-0 sudo[89017]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:54 compute-0 sudo[89140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnzbnixbihjxgvvniejkrkadmpasmbpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939253.8702788-576-120879007406051/AnsiballZ_copy.py'
Oct 08 16:00:54 compute-0 sudo[89140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:54 compute-0 python3.9[89142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939253.8702788-576-120879007406051/.source.json _original_basename=.mrjkkw2e follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:54 compute-0 sudo[89140]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:55 compute-0 sudo[89292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxgxreroihunbcddlckgvonkypnuklxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939255.0980082-606-163541330239454/AnsiballZ_file.py'
Oct 08 16:00:55 compute-0 sudo[89292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:55 compute-0 python3.9[89294]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:00:55 compute-0 sudo[89292]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:56 compute-0 sudo[89444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbdyyvtapxvigevfexbknavpmitzingc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939255.8678496-622-273327519612902/AnsiballZ_stat.py'
Oct 08 16:00:56 compute-0 sudo[89444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:56 compute-0 sudo[89444]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:56 compute-0 sudo[89567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgldpqacgwqoagmmhhucymdflviruij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939255.8678496-622-273327519612902/AnsiballZ_copy.py'
Oct 08 16:00:56 compute-0 sudo[89567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:57 compute-0 sudo[89567]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:57 compute-0 sudo[89719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmribjkqqsebcoitxyyxjzjdgnswbfax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939257.4077582-656-53479903733872/AnsiballZ_container_config_data.py'
Oct 08 16:00:57 compute-0 sudo[89719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:58 compute-0 python3.9[89721]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 08 16:00:58 compute-0 sudo[89719]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:58 compute-0 sudo[89871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luofwxdheumkiguikqyjkrvjkllspyvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939258.3474753-674-251207709761342/AnsiballZ_container_config_hash.py'
Oct 08 16:00:58 compute-0 sudo[89871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:59 compute-0 python3.9[89873]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 16:00:59 compute-0 sudo[89871]: pam_unix(sudo:session): session closed for user root
Oct 08 16:00:59 compute-0 sudo[90023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orcojzbsubfsfewpknbqpzhqgtysvkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939259.3213806-692-192360952570026/AnsiballZ_podman_container_info.py'
Oct 08 16:00:59 compute-0 sudo[90023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:00:59 compute-0 python3.9[90025]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 16:01:00 compute-0 sudo[90023]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:01 compute-0 CROND[90077]: (root) CMD (run-parts /etc/cron.hourly)
Oct 08 16:01:01 compute-0 run-parts[90080]: (/etc/cron.hourly) starting 0anacron
Oct 08 16:01:01 compute-0 anacron[90088]: Anacron started on 2025-10-08
Oct 08 16:01:01 compute-0 anacron[90088]: Will run job `cron.daily' in 23 min.
Oct 08 16:01:01 compute-0 anacron[90088]: Will run job `cron.weekly' in 43 min.
Oct 08 16:01:01 compute-0 anacron[90088]: Will run job `cron.monthly' in 63 min.
Oct 08 16:01:01 compute-0 anacron[90088]: Jobs will be executed sequentially
Oct 08 16:01:01 compute-0 run-parts[90090]: (/etc/cron.hourly) finished 0anacron
Oct 08 16:01:01 compute-0 CROND[90076]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 08 16:01:02 compute-0 sudo[90216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtlglphtdxtwuzwlvodyfngbjfzyhefo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939261.7677622-718-15541646176112/AnsiballZ_edpm_container_manage.py'
Oct 08 16:01:02 compute-0 sudo[90216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:02 compute-0 python3[90218]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 16:01:02 compute-0 podman[90252]: 2025-10-08 16:01:02.721200792 +0000 UTC m=+0.049472843 container create 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 16:01:02 compute-0 podman[90252]: 2025-10-08 16:01:02.693825419 +0000 UTC m=+0.022097480 image pull d616130b511b8ac89010d6032e64392b92f4b2139a68678af8c27be64522adb9 38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Oct 08 16:01:02 compute-0 python3[90218]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z 38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest
Oct 08 16:01:02 compute-0 sudo[90216]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:03 compute-0 sudo[90440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gendytqgzooxmxnlutafwnjjevmvfbza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939263.0507233-734-274460716709460/AnsiballZ_stat.py'
Oct 08 16:01:03 compute-0 sudo[90440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:03 compute-0 python3.9[90442]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:03 compute-0 sudo[90440]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:04 compute-0 sudo[90594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-netwbswiksqukdynbmmdmdnrdczerfir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939263.8545713-752-105142364763226/AnsiballZ_file.py'
Oct 08 16:01:04 compute-0 sudo[90594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:04 compute-0 python3.9[90596]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:04 compute-0 sudo[90594]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:04 compute-0 sudo[90670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fltkiutcyglabjgbsyyeldraajsgctpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939263.8545713-752-105142364763226/AnsiballZ_stat.py'
Oct 08 16:01:04 compute-0 sudo[90670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:04 compute-0 python3.9[90672]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:04 compute-0 sudo[90670]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:05 compute-0 sudo[90821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyvwumixtkytdeimfqfmnlznwpdvglqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939264.9241905-752-114499143465331/AnsiballZ_copy.py'
Oct 08 16:01:05 compute-0 sudo[90821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:05 compute-0 python3.9[90823]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759939264.9241905-752-114499143465331/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:05 compute-0 sudo[90821]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:05 compute-0 sudo[90897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egycfboasohtktakjsbqjviefkazicoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939264.9241905-752-114499143465331/AnsiballZ_systemd.py'
Oct 08 16:01:05 compute-0 sudo[90897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:06 compute-0 python3.9[90899]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:01:06 compute-0 systemd[1]: Reloading.
Oct 08 16:01:06 compute-0 systemd-rc-local-generator[90926]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:01:06 compute-0 systemd-sysv-generator[90930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:01:06 compute-0 sudo[90897]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:06 compute-0 sudo[91007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvsxmtkemjvgxfdjbmzsbgwpsxueyquu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939264.9241905-752-114499143465331/AnsiballZ_systemd.py'
Oct 08 16:01:06 compute-0 sudo[91007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:07 compute-0 python3.9[91009]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:01:07 compute-0 systemd[1]: Reloading.
Oct 08 16:01:07 compute-0 systemd-rc-local-generator[91039]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:01:07 compute-0 systemd-sysv-generator[91042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:01:07 compute-0 systemd[1]: Starting iscsid container...
Oct 08 16:01:07 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4ce61f79ccaceb8e129de79bd75d121546b46e377b0fc433451864fcb3e9d/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 08 16:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4ce61f79ccaceb8e129de79bd75d121546b46e377b0fc433451864fcb3e9d/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 16:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4ce61f79ccaceb8e129de79bd75d121546b46e377b0fc433451864fcb3e9d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 16:01:07 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0.
Oct 08 16:01:07 compute-0 podman[91049]: 2025-10-08 16:01:07.72131562 +0000 UTC m=+0.134174175 container init 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid)
Oct 08 16:01:07 compute-0 iscsid[91064]: + sudo -E kolla_set_configs
Oct 08 16:01:07 compute-0 podman[91049]: 2025-10-08 16:01:07.752752715 +0000 UTC m=+0.165611180 container start 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 08 16:01:07 compute-0 podman[91049]: iscsid
Oct 08 16:01:07 compute-0 sudo[91070]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 08 16:01:07 compute-0 systemd[1]: Started iscsid container.
Oct 08 16:01:07 compute-0 systemd[1]: Created slice User Slice of UID 0.
Oct 08 16:01:07 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 08 16:01:07 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 08 16:01:07 compute-0 sudo[91007]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:07 compute-0 systemd[1]: Starting User Manager for UID 0...
Oct 08 16:01:07 compute-0 systemd[91085]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 08 16:01:07 compute-0 podman[91071]: 2025-10-08 16:01:07.848923507 +0000 UTC m=+0.083599795 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 08 16:01:07 compute-0 systemd[1]: 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0-3531424249905869.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 16:01:07 compute-0 systemd[1]: 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0-3531424249905869.service: Failed with result 'exit-code'.
Oct 08 16:01:07 compute-0 systemd[91085]: Queued start job for default target Main User Target.
Oct 08 16:01:07 compute-0 systemd[91085]: Created slice User Application Slice.
Oct 08 16:01:07 compute-0 systemd[91085]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 08 16:01:07 compute-0 systemd[91085]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 16:01:07 compute-0 systemd[91085]: Reached target Paths.
Oct 08 16:01:07 compute-0 systemd[91085]: Reached target Timers.
Oct 08 16:01:07 compute-0 systemd[91085]: Starting D-Bus User Message Bus Socket...
Oct 08 16:01:07 compute-0 systemd[91085]: Starting Create User's Volatile Files and Directories...
Oct 08 16:01:07 compute-0 systemd[91085]: Listening on D-Bus User Message Bus Socket.
Oct 08 16:01:07 compute-0 systemd[91085]: Finished Create User's Volatile Files and Directories.
Oct 08 16:01:07 compute-0 systemd[91085]: Reached target Sockets.
Oct 08 16:01:07 compute-0 systemd[91085]: Reached target Basic System.
Oct 08 16:01:07 compute-0 systemd[91085]: Reached target Main User Target.
Oct 08 16:01:07 compute-0 systemd[91085]: Startup finished in 132ms.
Oct 08 16:01:07 compute-0 systemd[1]: Started User Manager for UID 0.
Oct 08 16:01:07 compute-0 systemd[1]: Started Session c3 of User root.
Oct 08 16:01:07 compute-0 sudo[91070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 16:01:08 compute-0 iscsid[91064]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 16:01:08 compute-0 iscsid[91064]: INFO:__main__:Validating config file
Oct 08 16:01:08 compute-0 iscsid[91064]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 16:01:08 compute-0 iscsid[91064]: INFO:__main__:Writing out command to execute
Oct 08 16:01:08 compute-0 sudo[91070]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:08 compute-0 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 08 16:01:08 compute-0 iscsid[91064]: ++ cat /run_command
Oct 08 16:01:08 compute-0 iscsid[91064]: + CMD='/usr/sbin/iscsid -f'
Oct 08 16:01:08 compute-0 iscsid[91064]: + ARGS=
Oct 08 16:01:08 compute-0 iscsid[91064]: + sudo kolla_copy_cacerts
Oct 08 16:01:08 compute-0 sudo[91134]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 08 16:01:08 compute-0 systemd[1]: Started Session c4 of User root.
Oct 08 16:01:08 compute-0 sudo[91134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 16:01:08 compute-0 sudo[91134]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:08 compute-0 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 08 16:01:08 compute-0 iscsid[91064]: + [[ ! -n '' ]]
Oct 08 16:01:08 compute-0 iscsid[91064]: + . kolla_extend_start
Oct 08 16:01:08 compute-0 iscsid[91064]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 08 16:01:08 compute-0 iscsid[91064]: Running command: '/usr/sbin/iscsid -f'
Oct 08 16:01:08 compute-0 iscsid[91064]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 08 16:01:08 compute-0 iscsid[91064]: + umask 0022
Oct 08 16:01:08 compute-0 iscsid[91064]: + exec /usr/sbin/iscsid -f
Oct 08 16:01:08 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Oct 08 16:01:08 compute-0 python3.9[91270]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:09 compute-0 sudo[91420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juuzdvqfrxiebczqovcddxjiiplkqoye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939268.8388715-826-21168435716422/AnsiballZ_file.py'
Oct 08 16:01:09 compute-0 sudo[91420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:09 compute-0 python3.9[91422]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:09 compute-0 sudo[91420]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:10 compute-0 sudo[91572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjekprbhqrgndwwwdhryndqrqcajrmmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939269.7457845-848-128054561855540/AnsiballZ_service_facts.py'
Oct 08 16:01:10 compute-0 sudo[91572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:10 compute-0 python3.9[91574]: ansible-ansible.builtin.service_facts Invoked
Oct 08 16:01:10 compute-0 network[91591]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 16:01:10 compute-0 network[91592]: 'network-scripts' will be removed from distribution in near future.
Oct 08 16:01:10 compute-0 network[91593]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 16:01:13 compute-0 sudo[91572]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:14 compute-0 podman[91740]: 2025-10-08 16:01:14.463420803 +0000 UTC m=+0.065837545 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Oct 08 16:01:14 compute-0 sudo[91887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrfriznhmksgmvtuowwwnohwxqpzngde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939274.5735407-868-109211290834136/AnsiballZ_file.py'
Oct 08 16:01:14 compute-0 sudo[91887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:15 compute-0 python3.9[91889]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 16:01:15 compute-0 sudo[91887]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:15 compute-0 sudo[92039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdsdsmztkrjhbbynyooxncncohafxdak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939275.2721906-884-49387932960432/AnsiballZ_modprobe.py'
Oct 08 16:01:15 compute-0 sudo[92039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:15 compute-0 python3.9[92041]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 08 16:01:16 compute-0 sudo[92039]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:16 compute-0 sudo[92195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysklecicotbriadsmoecqaspmtqibsch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939276.2139359-900-170437534931544/AnsiballZ_stat.py'
Oct 08 16:01:16 compute-0 sudo[92195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:16 compute-0 python3.9[92197]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:16 compute-0 sudo[92195]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:17 compute-0 sudo[92318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfmcpdlkzbeyysebcksmgypjqenevpse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939276.2139359-900-170437534931544/AnsiballZ_copy.py'
Oct 08 16:01:17 compute-0 sudo[92318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:17 compute-0 python3.9[92320]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939276.2139359-900-170437534931544/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:17 compute-0 sudo[92318]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:17 compute-0 sudo[92470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqxglvcexlbqniuoucgueyxmwvfdcbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939277.64432-932-101075684308704/AnsiballZ_lineinfile.py'
Oct 08 16:01:17 compute-0 sudo[92470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:18 compute-0 python3.9[92472]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:18 compute-0 sudo[92470]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:18 compute-0 systemd[1]: Stopping User Manager for UID 0...
Oct 08 16:01:18 compute-0 systemd[91085]: Activating special unit Exit the Session...
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped target Main User Target.
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped target Basic System.
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped target Paths.
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped target Sockets.
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped target Timers.
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 16:01:18 compute-0 systemd[91085]: Closed D-Bus User Message Bus Socket.
Oct 08 16:01:18 compute-0 systemd[91085]: Stopped Create User's Volatile Files and Directories.
Oct 08 16:01:18 compute-0 systemd[91085]: Removed slice User Application Slice.
Oct 08 16:01:18 compute-0 systemd[91085]: Reached target Shutdown.
Oct 08 16:01:18 compute-0 systemd[91085]: Finished Exit the Session.
Oct 08 16:01:18 compute-0 systemd[91085]: Reached target Exit the Session.
Oct 08 16:01:18 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Oct 08 16:01:18 compute-0 systemd[1]: Stopped User Manager for UID 0.
Oct 08 16:01:18 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 08 16:01:18 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 08 16:01:18 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 08 16:01:18 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 08 16:01:18 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Oct 08 16:01:18 compute-0 sudo[92625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywycatabexipriountctnhpkyieaxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939278.398752-948-99271783506470/AnsiballZ_systemd.py'
Oct 08 16:01:18 compute-0 sudo[92625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:19 compute-0 python3.9[92627]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:01:19 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 08 16:01:19 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 08 16:01:19 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 08 16:01:19 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 08 16:01:19 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 08 16:01:19 compute-0 sudo[92625]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:19 compute-0 podman[92629]: 2025-10-08 16:01:19.219779045 +0000 UTC m=+0.130445580 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 08 16:01:19 compute-0 sudo[92807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xepwbnkogjdvyxwimknpaaawkhizsqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939279.372891-964-149204703919139/AnsiballZ_file.py'
Oct 08 16:01:19 compute-0 sudo[92807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:19 compute-0 python3.9[92809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:19 compute-0 sudo[92807]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:20 compute-0 sudo[92959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udbsiccploqjbbizvqdilyxzylnmzjrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939280.1237001-982-174644721583374/AnsiballZ_stat.py'
Oct 08 16:01:20 compute-0 sudo[92959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:20 compute-0 python3.9[92961]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:20 compute-0 sudo[92959]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:21 compute-0 sudo[93111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiuhkkrsvahmsyppscnozmzmnkkmlduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939280.8288422-1000-37914065911801/AnsiballZ_stat.py'
Oct 08 16:01:21 compute-0 sudo[93111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:21 compute-0 python3.9[93113]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:21 compute-0 sudo[93111]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:21 compute-0 sudo[93263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkyvdyqbitvayvtyomqhpmngwwffnkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939281.5627599-1016-182596605435415/AnsiballZ_stat.py'
Oct 08 16:01:21 compute-0 sudo[93263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:22 compute-0 python3.9[93265]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:22 compute-0 sudo[93263]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:22 compute-0 sudo[93386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwydoogrbxhbwzvydtoteklyqvoxskgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939281.5627599-1016-182596605435415/AnsiballZ_copy.py'
Oct 08 16:01:22 compute-0 sudo[93386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:22 compute-0 python3.9[93388]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939281.5627599-1016-182596605435415/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:22 compute-0 sudo[93386]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:23 compute-0 sudo[93538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itcnknhgthnomtluyyjfbizeflgnzsrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939282.8455212-1046-247786137759239/AnsiballZ_command.py'
Oct 08 16:01:23 compute-0 sudo[93538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:23 compute-0 python3.9[93540]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:01:23 compute-0 sudo[93538]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:24 compute-0 sudo[93691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiqyxwmkuckmzfjfaxzlmqzqamyoryda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939283.7304163-1062-279478161050167/AnsiballZ_lineinfile.py'
Oct 08 16:01:24 compute-0 sudo[93691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:24 compute-0 python3.9[93693]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:24 compute-0 sudo[93691]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:24 compute-0 sudo[93843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymjfsfzbiqztjsrecoxmssgvnvfmchf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939284.4025674-1078-273917180520985/AnsiballZ_replace.py'
Oct 08 16:01:24 compute-0 sudo[93843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:25 compute-0 python3.9[93845]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:25 compute-0 sudo[93843]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:25 compute-0 sudo[93995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njetuprifwaqrixuannbvqhhkjhavwni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939285.3715007-1094-60537955763756/AnsiballZ_replace.py'
Oct 08 16:01:25 compute-0 sudo[93995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:25 compute-0 python3.9[93997]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:25 compute-0 sudo[93995]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:26 compute-0 sudo[94147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnzeetlhqaissioryojpfwizhpnkxjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939286.0718517-1112-2485811597386/AnsiballZ_lineinfile.py'
Oct 08 16:01:26 compute-0 sudo[94147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:26 compute-0 python3.9[94149]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:26 compute-0 sudo[94147]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:26 compute-0 sudo[94299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swnwehyzmcmpmsqtwpevlturgjrjegvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939286.69462-1112-97100737977487/AnsiballZ_lineinfile.py'
Oct 08 16:01:26 compute-0 sudo[94299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:27 compute-0 python3.9[94301]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:27 compute-0 sudo[94299]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:27 compute-0 sudo[94451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjsmzdqbovdgyklmvzewdzwgihmccjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939287.3789525-1112-115738133586762/AnsiballZ_lineinfile.py'
Oct 08 16:01:27 compute-0 sudo[94451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:27 compute-0 python3.9[94453]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:27 compute-0 sudo[94451]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:28 compute-0 sudo[94603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqcoaomcceajkpcyqmnmzblxogzftkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939288.0288517-1112-27877787661383/AnsiballZ_lineinfile.py'
Oct 08 16:01:28 compute-0 sudo[94603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:28 compute-0 python3.9[94605]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:28 compute-0 sudo[94603]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:29 compute-0 sudo[94755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yikzcweaqdobjrpiaemhikohrpucvbgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939288.8347125-1170-1047883155257/AnsiballZ_stat.py'
Oct 08 16:01:29 compute-0 sudo[94755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:29 compute-0 python3.9[94757]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:29 compute-0 sudo[94755]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:30 compute-0 sudo[94909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwmduhouwinzwbnxqttdhmhigbdgmxbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939289.5975733-1186-130916886421429/AnsiballZ_file.py'
Oct 08 16:01:30 compute-0 sudo[94909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:30 compute-0 python3.9[94911]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:30 compute-0 sudo[94909]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:30 compute-0 sudo[95061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wngijjllynmfnvzbuhjzggcbchpfldtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939290.5493116-1204-81299060813512/AnsiballZ_file.py'
Oct 08 16:01:30 compute-0 sudo[95061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:31 compute-0 python3.9[95063]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:31 compute-0 sudo[95061]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:31 compute-0 sudo[95213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkhtwsgzoaicgnxvzqsmhfmjzbpytjee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939291.2219338-1220-134327587701466/AnsiballZ_stat.py'
Oct 08 16:01:31 compute-0 sudo[95213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:31 compute-0 python3.9[95215]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:31 compute-0 sudo[95213]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:31 compute-0 sudo[95291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwxhkowxuiffrbqosjcyfiepzzgwfyzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939291.2219338-1220-134327587701466/AnsiballZ_file.py'
Oct 08 16:01:31 compute-0 sudo[95291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:32 compute-0 python3.9[95293]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:32 compute-0 sudo[95291]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:32 compute-0 sudo[95443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtrchmwyinrjffwnifwueocellejjvht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939292.3382816-1220-232493359123779/AnsiballZ_stat.py'
Oct 08 16:01:32 compute-0 sudo[95443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:32 compute-0 python3.9[95445]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:32 compute-0 sudo[95443]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:33 compute-0 sudo[95521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjruvshftwvoocynqfbxvrpokhqqwmax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939292.3382816-1220-232493359123779/AnsiballZ_file.py'
Oct 08 16:01:33 compute-0 sudo[95521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:33 compute-0 python3.9[95523]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:33 compute-0 sudo[95521]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:33 compute-0 sudo[95673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lydusprmhmrnelvwysjkpuqhpxtgjtxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939293.566433-1266-224583912618921/AnsiballZ_file.py'
Oct 08 16:01:33 compute-0 sudo[95673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:34 compute-0 python3.9[95675]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:34 compute-0 sudo[95673]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:34 compute-0 sudo[95825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmxuiwahzqbdhjagyslhcnkpdujhkxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939294.3297675-1282-173337371029138/AnsiballZ_stat.py'
Oct 08 16:01:34 compute-0 sudo[95825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:34 compute-0 python3.9[95827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:34 compute-0 sudo[95825]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:35 compute-0 sudo[95903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcerprgciwiqhlbjpglejpbkbydyypuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939294.3297675-1282-173337371029138/AnsiballZ_file.py'
Oct 08 16:01:35 compute-0 sudo[95903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:35 compute-0 python3.9[95905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:35 compute-0 sudo[95903]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:35 compute-0 sudo[96055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gazlnonenxqnkjncajkkxiwkddplbaxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939295.6089041-1306-154722167996750/AnsiballZ_stat.py'
Oct 08 16:01:35 compute-0 sudo[96055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:36 compute-0 python3.9[96057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:36 compute-0 sudo[96055]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:36 compute-0 sudo[96133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmyctnwslupuewoompgauzdfiftonmqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939295.6089041-1306-154722167996750/AnsiballZ_file.py'
Oct 08 16:01:36 compute-0 sudo[96133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:36 compute-0 python3.9[96135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:36 compute-0 sudo[96133]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:37 compute-0 sudo[96285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wenrsbadhkpyjwkwhfmmnkpwdqwsxnon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939296.7875392-1330-151007345466813/AnsiballZ_systemd.py'
Oct 08 16:01:37 compute-0 sudo[96285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:37 compute-0 python3.9[96287]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:01:37 compute-0 systemd[1]: Reloading.
Oct 08 16:01:37 compute-0 systemd-rc-local-generator[96314]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:01:37 compute-0 systemd-sysv-generator[96318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:01:37 compute-0 sudo[96285]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:38 compute-0 sudo[96484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcszqrbqyzhadlnmkgxpszzmkkydteqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939298.002524-1346-166825286999454/AnsiballZ_stat.py'
Oct 08 16:01:38 compute-0 sudo[96484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:38 compute-0 podman[96448]: 2025-10-08 16:01:38.366918412 +0000 UTC m=+0.081345240 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 08 16:01:38 compute-0 python3.9[96487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:38 compute-0 sudo[96484]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:38 compute-0 sudo[96570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfpbqjklhzevabgmudeysbawsbeayslu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939298.002524-1346-166825286999454/AnsiballZ_file.py'
Oct 08 16:01:38 compute-0 sudo[96570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:39 compute-0 python3.9[96572]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:39 compute-0 sudo[96570]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:39 compute-0 sudo[96722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdeuupxmjsceafqznlyxbmsyhjmzvcua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939299.2256253-1370-99887456996634/AnsiballZ_stat.py'
Oct 08 16:01:39 compute-0 sudo[96722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:39 compute-0 python3.9[96724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:39 compute-0 sudo[96722]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:40 compute-0 sudo[96800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuehiofhbfigbipnjzkahpffejwxmmef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939299.2256253-1370-99887456996634/AnsiballZ_file.py'
Oct 08 16:01:40 compute-0 sudo[96800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:40 compute-0 python3.9[96802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:40 compute-0 sudo[96800]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:40 compute-0 sudo[96952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjlblbydltqhsxecrznswaxvgpqzgbrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939300.4983466-1394-129584239610695/AnsiballZ_systemd.py'
Oct 08 16:01:40 compute-0 sudo[96952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:40 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 08 16:01:41 compute-0 python3.9[96954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:01:41 compute-0 systemd[1]: Reloading.
Oct 08 16:01:41 compute-0 systemd-rc-local-generator[96981]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:01:41 compute-0 systemd-sysv-generator[96985]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:01:41 compute-0 systemd[1]: Starting Create netns directory...
Oct 08 16:01:41 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 08 16:01:41 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 08 16:01:41 compute-0 systemd[1]: Finished Create netns directory.
Oct 08 16:01:41 compute-0 sudo[96952]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:01:41.856 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:01:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:01:41.857 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:01:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:01:41.857 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:01:42 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 16:01:42 compute-0 sudo[97147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uegixvshgxpocwyawsybdebwgfffuusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939301.9338107-1414-42133508734388/AnsiballZ_file.py'
Oct 08 16:01:42 compute-0 sudo[97147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:42 compute-0 python3.9[97149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:42 compute-0 sudo[97147]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:43 compute-0 sudo[97299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiwiagaaxiqheesvpyumvlqzlivoqeac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939302.7400243-1430-2797747442545/AnsiballZ_stat.py'
Oct 08 16:01:43 compute-0 sudo[97299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:43 compute-0 python3.9[97301]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:43 compute-0 sudo[97299]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:43 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 08 16:01:43 compute-0 sudo[97423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitphipwonzcmaeoltqpjnqspuwmtect ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939302.7400243-1430-2797747442545/AnsiballZ_copy.py'
Oct 08 16:01:43 compute-0 sudo[97423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:43 compute-0 python3.9[97425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939302.7400243-1430-2797747442545/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:43 compute-0 sudo[97423]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 08 16:01:44 compute-0 podman[97503]: 2025-10-08 16:01:44.664435974 +0000 UTC m=+0.093862428 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 08 16:01:44 compute-0 sudo[97595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynucxpwtdyfqworbabmeyyskhalwvjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939304.4592981-1464-34487266680670/AnsiballZ_file.py'
Oct 08 16:01:44 compute-0 sudo[97595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:44 compute-0 python3.9[97597]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:01:45 compute-0 sudo[97595]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:45 compute-0 sudo[97747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjwxwqzjsoypqvidpqgkciczauctiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939305.2658744-1480-151015063188943/AnsiballZ_stat.py'
Oct 08 16:01:45 compute-0 sudo[97747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:45 compute-0 python3.9[97749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:01:45 compute-0 sudo[97747]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:46 compute-0 sudo[97870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftbkiajsjtqpqtekcsqgakwolxnuvxas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939305.2658744-1480-151015063188943/AnsiballZ_copy.py'
Oct 08 16:01:46 compute-0 sudo[97870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:46 compute-0 python3.9[97872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939305.2658744-1480-151015063188943/.source.json _original_basename=.06mng6li follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:46 compute-0 sudo[97870]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:47 compute-0 sudo[98022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfekhobdolgmlijwlffzrgsiqwvmxipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939306.7005575-1510-193754351273480/AnsiballZ_file.py'
Oct 08 16:01:47 compute-0 sudo[98022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:47 compute-0 python3.9[98024]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:47 compute-0 sudo[98022]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:47 compute-0 sudo[98174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmqyiglxijhkhbbauxrzytomqrjbkbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939307.468425-1526-232722627298094/AnsiballZ_stat.py'
Oct 08 16:01:47 compute-0 sudo[98174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:47 compute-0 sudo[98174]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:48 compute-0 sudo[98297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tirisewqvdqixqtsunagimvyopnbrazj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939307.468425-1526-232722627298094/AnsiballZ_copy.py'
Oct 08 16:01:48 compute-0 sudo[98297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:48 compute-0 sudo[98297]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:49 compute-0 sudo[98449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taykjnliworukeojysiwttqsnbxyelcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939308.8808024-1560-195347769205096/AnsiballZ_container_config_data.py'
Oct 08 16:01:49 compute-0 sudo[98449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:49 compute-0 python3.9[98451]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 08 16:01:49 compute-0 sudo[98449]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:49 compute-0 podman[98452]: 2025-10-08 16:01:49.480292501 +0000 UTC m=+0.083915022 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:01:50 compute-0 sudo[98627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xepirzqlrgallhqfaqheybwqslwhyzbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939309.9183168-1578-274597708727415/AnsiballZ_container_config_hash.py'
Oct 08 16:01:50 compute-0 sudo[98627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:50 compute-0 python3.9[98629]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 16:01:50 compute-0 sudo[98627]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:51 compute-0 sudo[98779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmpefhrgyskayooxctentmvnknnyibpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939310.7723272-1596-22244041643369/AnsiballZ_podman_container_info.py'
Oct 08 16:01:51 compute-0 sudo[98779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:51 compute-0 python3.9[98781]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 08 16:01:51 compute-0 sudo[98779]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:52 compute-0 sudo[98958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulpyvgwoghbfolbsnroywvvadfaaltlp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939312.1618233-1622-114440316102101/AnsiballZ_edpm_container_manage.py'
Oct 08 16:01:52 compute-0 sudo[98958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:52 compute-0 python3[98960]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 16:01:52 compute-0 podman[98998]: 2025-10-08 16:01:52.971033852 +0000 UTC m=+0.057652194 container create 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, managed_by=edpm_ansible, io.buildah.version=1.41.4, config_id=multipathd, tcib_managed=true)
Oct 08 16:01:52 compute-0 podman[98998]: 2025-10-08 16:01:52.938587297 +0000 UTC m=+0.025205689 image pull baa22bed2df6f2fe2e91e3d9e9b226e81196010fa44d36f9efffd332059a07ea 38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Oct 08 16:01:52 compute-0 python3[98960]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z 38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest
Oct 08 16:01:53 compute-0 sudo[98958]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:53 compute-0 sudo[99186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbvtbhsdciahqitxkfchgtsbhnafwmrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939313.28196-1638-56516817209755/AnsiballZ_stat.py'
Oct 08 16:01:53 compute-0 sudo[99186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:53 compute-0 python3.9[99188]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:53 compute-0 sudo[99186]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:54 compute-0 sudo[99340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzqeyyconkadapvlxhlfpvuskvwygpia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939314.0708325-1656-228498836542023/AnsiballZ_file.py'
Oct 08 16:01:54 compute-0 sudo[99340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:54 compute-0 python3.9[99342]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:54 compute-0 sudo[99340]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:54 compute-0 sudo[99416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdiushlxczrqypxrxefayvllezxwonac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939314.0708325-1656-228498836542023/AnsiballZ_stat.py'
Oct 08 16:01:54 compute-0 sudo[99416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:55 compute-0 python3.9[99418]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:55 compute-0 sudo[99416]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:55 compute-0 sudo[99567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahkfyticpvmsqqtmhssykylweiisdhke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939315.12836-1656-150382959123943/AnsiballZ_copy.py'
Oct 08 16:01:55 compute-0 sudo[99567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:55 compute-0 python3.9[99569]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759939315.12836-1656-150382959123943/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:01:55 compute-0 sudo[99567]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:56 compute-0 sudo[99643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxeuecwrtpejyedvtsfvrmlzsagvtxio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939315.12836-1656-150382959123943/AnsiballZ_systemd.py'
Oct 08 16:01:56 compute-0 sudo[99643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:56 compute-0 python3.9[99645]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:01:56 compute-0 systemd[1]: Reloading.
Oct 08 16:01:56 compute-0 systemd-sysv-generator[99676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:01:56 compute-0 systemd-rc-local-generator[99671]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:01:56 compute-0 sudo[99643]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:56 compute-0 sudo[99754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxfmxcrssgnnqjecgwhnbncowpdyaele ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939315.12836-1656-150382959123943/AnsiballZ_systemd.py'
Oct 08 16:01:56 compute-0 sudo[99754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:57 compute-0 python3.9[99756]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:01:57 compute-0 systemd[1]: Reloading.
Oct 08 16:01:57 compute-0 systemd-rc-local-generator[99786]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:01:57 compute-0 systemd-sysv-generator[99790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:01:57 compute-0 systemd[1]: Starting multipathd container...
Oct 08 16:01:57 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9751ee7a45790bfd9d178176045233efd6973205e431f7b31a03e3e95ba34f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 16:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9751ee7a45790bfd9d178176045233efd6973205e431f7b31a03e3e95ba34f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 16:01:57 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.
Oct 08 16:01:57 compute-0 podman[99796]: 2025-10-08 16:01:57.800062778 +0000 UTC m=+0.121837393 container init 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:01:57 compute-0 multipathd[99812]: + sudo -E kolla_set_configs
Oct 08 16:01:57 compute-0 sudo[99818]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 08 16:01:57 compute-0 sudo[99818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 16:01:57 compute-0 podman[99796]: 2025-10-08 16:01:57.83313202 +0000 UTC m=+0.154906595 container start 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 08 16:01:57 compute-0 podman[99796]: multipathd
Oct 08 16:01:57 compute-0 systemd[1]: Started multipathd container.
Oct 08 16:01:57 compute-0 multipathd[99812]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 16:01:57 compute-0 sudo[99754]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:57 compute-0 multipathd[99812]: INFO:__main__:Validating config file
Oct 08 16:01:57 compute-0 multipathd[99812]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 16:01:57 compute-0 multipathd[99812]: INFO:__main__:Writing out command to execute
Oct 08 16:01:57 compute-0 sudo[99818]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:57 compute-0 multipathd[99812]: ++ cat /run_command
Oct 08 16:01:57 compute-0 multipathd[99812]: + CMD='/usr/sbin/multipathd -d'
Oct 08 16:01:57 compute-0 multipathd[99812]: + ARGS=
Oct 08 16:01:57 compute-0 multipathd[99812]: + sudo kolla_copy_cacerts
Oct 08 16:01:57 compute-0 sudo[99840]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 08 16:01:57 compute-0 sudo[99840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 16:01:57 compute-0 sudo[99840]: pam_unix(sudo:session): session closed for user root
Oct 08 16:01:57 compute-0 multipathd[99812]: Running command: '/usr/sbin/multipathd -d'
Oct 08 16:01:57 compute-0 multipathd[99812]: + [[ ! -n '' ]]
Oct 08 16:01:57 compute-0 multipathd[99812]: + . kolla_extend_start
Oct 08 16:01:57 compute-0 multipathd[99812]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 08 16:01:57 compute-0 multipathd[99812]: + umask 0022
Oct 08 16:01:57 compute-0 multipathd[99812]: + exec /usr/sbin/multipathd -d
Oct 08 16:01:57 compute-0 podman[99819]: 2025-10-08 16:01:57.940973633 +0000 UTC m=+0.091569460 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd)
Oct 08 16:01:57 compute-0 systemd[1]: 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214-40ed4bc2fe3ca2e9.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 16:01:57 compute-0 systemd[1]: 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214-40ed4bc2fe3ca2e9.service: Failed with result 'exit-code'.
Oct 08 16:01:57 compute-0 multipathd[99812]: 606.706320 | multipathd v0.9.9: start up
Oct 08 16:01:57 compute-0 multipathd[99812]: 606.720717 | reconfigure: setting up paths and maps
Oct 08 16:01:57 compute-0 multipathd[99812]: 606.722818 | _check_bindings_file: failed to read header from /etc/multipath/bindings
Oct 08 16:01:57 compute-0 multipathd[99812]: 606.724203 | updated bindings file /etc/multipath/bindings
Oct 08 16:01:58 compute-0 python3.9[99999]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:01:59 compute-0 sudo[100151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzbcwwflmvkilqcvmpeyudbjszsqnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939318.8619356-1728-149083218424388/AnsiballZ_command.py'
Oct 08 16:01:59 compute-0 sudo[100151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:01:59 compute-0 python3.9[100153]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:01:59 compute-0 sudo[100151]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:00 compute-0 sudo[100316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjeysxcuevtxssdfebtketsrganfjess ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939319.6925175-1744-40037427459995/AnsiballZ_systemd.py'
Oct 08 16:02:00 compute-0 sudo[100316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:00 compute-0 python3.9[100318]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:02:00 compute-0 systemd[1]: Stopping multipathd container...
Oct 08 16:02:00 compute-0 multipathd[99812]: 609.216448 | multipathd: shut down
Oct 08 16:02:00 compute-0 systemd[1]: libpod-02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.scope: Deactivated successfully.
Oct 08 16:02:00 compute-0 podman[100322]: 2025-10-08 16:02:00.494408455 +0000 UTC m=+0.065160368 container died 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Oct 08 16:02:00 compute-0 systemd[1]: 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214-40ed4bc2fe3ca2e9.timer: Deactivated successfully.
Oct 08 16:02:00 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.
Oct 08 16:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214-userdata-shm.mount: Deactivated successfully.
Oct 08 16:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d9751ee7a45790bfd9d178176045233efd6973205e431f7b31a03e3e95ba34f-merged.mount: Deactivated successfully.
Oct 08 16:02:00 compute-0 podman[100322]: 2025-10-08 16:02:00.538536852 +0000 UTC m=+0.109288765 container cleanup 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=multipathd)
Oct 08 16:02:00 compute-0 podman[100322]: multipathd
Oct 08 16:02:00 compute-0 podman[100354]: multipathd
Oct 08 16:02:00 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 08 16:02:00 compute-0 systemd[1]: Stopped multipathd container.
Oct 08 16:02:00 compute-0 systemd[1]: Starting multipathd container...
Oct 08 16:02:00 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9751ee7a45790bfd9d178176045233efd6973205e431f7b31a03e3e95ba34f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 16:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9751ee7a45790bfd9d178176045233efd6973205e431f7b31a03e3e95ba34f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 16:02:00 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.
Oct 08 16:02:00 compute-0 podman[100367]: 2025-10-08 16:02:00.796808042 +0000 UTC m=+0.137949042 container init 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:02:00 compute-0 multipathd[100383]: + sudo -E kolla_set_configs
Oct 08 16:02:00 compute-0 sudo[100389]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 08 16:02:00 compute-0 sudo[100389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 16:02:00 compute-0 podman[100367]: 2025-10-08 16:02:00.837057599 +0000 UTC m=+0.178198539 container start 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 08 16:02:00 compute-0 podman[100367]: multipathd
Oct 08 16:02:00 compute-0 systemd[1]: Started multipathd container.
Oct 08 16:02:00 compute-0 sudo[100316]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:00 compute-0 multipathd[100383]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 16:02:00 compute-0 multipathd[100383]: INFO:__main__:Validating config file
Oct 08 16:02:00 compute-0 multipathd[100383]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 16:02:00 compute-0 multipathd[100383]: INFO:__main__:Writing out command to execute
Oct 08 16:02:00 compute-0 sudo[100389]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:00 compute-0 multipathd[100383]: ++ cat /run_command
Oct 08 16:02:00 compute-0 multipathd[100383]: + CMD='/usr/sbin/multipathd -d'
Oct 08 16:02:00 compute-0 multipathd[100383]: + ARGS=
Oct 08 16:02:00 compute-0 multipathd[100383]: + sudo kolla_copy_cacerts
Oct 08 16:02:00 compute-0 podman[100390]: 2025-10-08 16:02:00.923558994 +0000 UTC m=+0.069077830 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:02:00 compute-0 systemd[1]: 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214-62f99491a9da15fb.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 16:02:00 compute-0 systemd[1]: 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214-62f99491a9da15fb.service: Failed with result 'exit-code'.
Oct 08 16:02:00 compute-0 sudo[100413]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 08 16:02:00 compute-0 sudo[100413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 08 16:02:00 compute-0 sudo[100413]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:00 compute-0 multipathd[100383]: + [[ ! -n '' ]]
Oct 08 16:02:00 compute-0 multipathd[100383]: + . kolla_extend_start
Oct 08 16:02:00 compute-0 multipathd[100383]: Running command: '/usr/sbin/multipathd -d'
Oct 08 16:02:00 compute-0 multipathd[100383]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 08 16:02:00 compute-0 multipathd[100383]: + umask 0022
Oct 08 16:02:00 compute-0 multipathd[100383]: + exec /usr/sbin/multipathd -d
Oct 08 16:02:00 compute-0 multipathd[100383]: 609.713200 | multipathd v0.9.9: start up
Oct 08 16:02:00 compute-0 multipathd[100383]: 609.721709 | reconfigure: setting up paths and maps
Oct 08 16:02:01 compute-0 sudo[100572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zibqdwfpnabwxyznrrkuxjldctwijtnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939321.0945473-1760-79404790672264/AnsiballZ_file.py'
Oct 08 16:02:01 compute-0 sudo[100572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:01 compute-0 python3.9[100574]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:01 compute-0 sudo[100572]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:02 compute-0 sudo[100724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owtnizscftlzwfppvaqfbkkcwibdirek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939321.998735-1784-196570693262035/AnsiballZ_file.py'
Oct 08 16:02:02 compute-0 sudo[100724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:02 compute-0 python3.9[100726]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 08 16:02:02 compute-0 sudo[100724]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:03 compute-0 sudo[100876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auctqhzgorikddgnspjhjcevuauikyhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939322.7179244-1800-105793465178478/AnsiballZ_modprobe.py'
Oct 08 16:02:03 compute-0 sudo[100876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:03 compute-0 python3.9[100878]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 08 16:02:03 compute-0 kernel: Key type psk registered
Oct 08 16:02:03 compute-0 sudo[100876]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:03 compute-0 sudo[101037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpdqddxnszufrjndpnnlioqjcnxgdzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939323.503212-1816-244181765644394/AnsiballZ_stat.py'
Oct 08 16:02:03 compute-0 sudo[101037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:04 compute-0 python3.9[101039]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:02:04 compute-0 sudo[101037]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:04 compute-0 sudo[101160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trvislonzlvhjwgocetllyxsrizfmpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939323.503212-1816-244181765644394/AnsiballZ_copy.py'
Oct 08 16:02:04 compute-0 sudo[101160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:04 compute-0 python3.9[101162]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939323.503212-1816-244181765644394/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:04 compute-0 sudo[101160]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:05 compute-0 sudo[101312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zosqdonrwicxxockneqkjjkcgkrwxebm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939325.1039035-1848-277046255714466/AnsiballZ_lineinfile.py'
Oct 08 16:02:05 compute-0 sudo[101312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:05 compute-0 python3.9[101314]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:05 compute-0 sudo[101312]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:06 compute-0 sudo[101464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlksudykgqtjmitprgyoydfkqfndzkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939325.8260775-1864-164396603440458/AnsiballZ_systemd.py'
Oct 08 16:02:06 compute-0 sudo[101464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:06 compute-0 python3.9[101466]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:02:06 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 08 16:02:06 compute-0 systemd[1]: Stopped Load Kernel Modules.
Oct 08 16:02:06 compute-0 systemd[1]: Stopping Load Kernel Modules...
Oct 08 16:02:06 compute-0 systemd[1]: Starting Load Kernel Modules...
Oct 08 16:02:06 compute-0 systemd[1]: Finished Load Kernel Modules.
Oct 08 16:02:06 compute-0 sudo[101464]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:07 compute-0 sudo[101620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmwsvdejcnvksqnisshzzoirxbxzcjzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939326.8667555-1880-225462523955690/AnsiballZ_setup.py'
Oct 08 16:02:07 compute-0 sudo[101620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:07 compute-0 python3.9[101622]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 08 16:02:07 compute-0 sudo[101620]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:08 compute-0 sudo[101704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbacqnjawiibyqmfpfbpjcxrxbxjhmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939326.8667555-1880-225462523955690/AnsiballZ_dnf.py'
Oct 08 16:02:08 compute-0 sudo[101704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:08 compute-0 python3.9[101706]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 08 16:02:09 compute-0 podman[101708]: 2025-10-08 16:02:09.491676957 +0000 UTC m=+0.092146387 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:02:14 compute-0 podman[101731]: 2025-10-08 16:02:14.818741045 +0000 UTC m=+0.070015206 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:02:15 compute-0 systemd[1]: Reloading.
Oct 08 16:02:15 compute-0 systemd-rc-local-generator[101778]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:02:15 compute-0 systemd-sysv-generator[101782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:02:15 compute-0 systemd[1]: Reloading.
Oct 08 16:02:15 compute-0 systemd-rc-local-generator[101813]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:02:15 compute-0 systemd-sysv-generator[101817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:02:15 compute-0 systemd-logind[847]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 08 16:02:15 compute-0 systemd-logind[847]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 08 16:02:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 08 16:02:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Oct 08 16:02:16 compute-0 systemd[1]: Reloading.
Oct 08 16:02:16 compute-0 systemd-rc-local-generator[101909]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:02:16 compute-0 systemd-sysv-generator[101912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:02:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 08 16:02:17 compute-0 sudo[101704]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:17 compute-0 sudo[103194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoepvebezeqherhtizdkwhilsjabaymn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939337.35763-1904-129636960362597/AnsiballZ_file.py'
Oct 08 16:02:17 compute-0 sudo[103194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 08 16:02:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Oct 08 16:02:17 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.838s CPU time.
Oct 08 16:02:17 compute-0 systemd[1]: run-r14d87ecc4e3b4568bb567fc405d99ca0.service: Deactivated successfully.
Oct 08 16:02:17 compute-0 python3.9[103196]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:17 compute-0 sudo[103194]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:18 compute-0 python3.9[103347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 16:02:19 compute-0 sudo[103501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voymcrsmqpupfhocixakduncnfuxlxix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939339.1377528-1939-270313469343067/AnsiballZ_file.py'
Oct 08 16:02:19 compute-0 sudo[103501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:19 compute-0 python3.9[103503]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:19 compute-0 sudo[103501]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:20 compute-0 podman[103580]: 2025-10-08 16:02:20.550155335 +0000 UTC m=+0.136832810 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Oct 08 16:02:20 compute-0 sudo[103679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yffyirqplmrpnxuucsrtgnaukzatemmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939340.0440652-1961-250591559825199/AnsiballZ_systemd_service.py'
Oct 08 16:02:20 compute-0 sudo[103679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:21 compute-0 python3.9[103681]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:02:21 compute-0 systemd[1]: Reloading.
Oct 08 16:02:21 compute-0 systemd-sysv-generator[103711]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:02:21 compute-0 systemd-rc-local-generator[103708]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:02:21 compute-0 sudo[103679]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:22 compute-0 python3.9[103865]: ansible-ansible.builtin.service_facts Invoked
Oct 08 16:02:22 compute-0 network[103882]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 16:02:22 compute-0 network[103883]: 'network-scripts' will be removed from distribution in near future.
Oct 08 16:02:22 compute-0 network[103884]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 16:02:26 compute-0 sudo[104159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzfdsiuemlqeptvsmhoivuiqkmhabbas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939346.438216-1999-256731396230093/AnsiballZ_systemd_service.py'
Oct 08 16:02:26 compute-0 sudo[104159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:27 compute-0 python3.9[104161]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:27 compute-0 sudo[104159]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:27 compute-0 sudo[104312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrvblcybzucokwgjjneeuztssbbrtcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939347.3040278-1999-150143563150285/AnsiballZ_systemd_service.py'
Oct 08 16:02:27 compute-0 sudo[104312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:27 compute-0 python3.9[104314]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:28 compute-0 sudo[104312]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:28 compute-0 sudo[104465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grlvdkztsmarfpccfruawowtygnujpuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939348.1455138-1999-67242235257441/AnsiballZ_systemd_service.py'
Oct 08 16:02:28 compute-0 sudo[104465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:28 compute-0 python3.9[104467]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:28 compute-0 sudo[104465]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:29 compute-0 sudo[104618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gblrgucqqvyjxervlnkuikzcqqfkdzpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939348.9143898-1999-179930573010346/AnsiballZ_systemd_service.py'
Oct 08 16:02:29 compute-0 sudo[104618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:29 compute-0 python3.9[104620]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:29 compute-0 sudo[104618]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:30 compute-0 sudo[104771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdclyqrkcwuwuiyxpprbvkoavbbajimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939349.7423224-1999-132065028462862/AnsiballZ_systemd_service.py'
Oct 08 16:02:30 compute-0 sudo[104771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:30 compute-0 python3.9[104773]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:30 compute-0 sudo[104771]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:30 compute-0 sudo[104924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmmuufyueocnamgfjchbphvxefkkkqky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939350.5751302-1999-221431686822114/AnsiballZ_systemd_service.py'
Oct 08 16:02:30 compute-0 sudo[104924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:31 compute-0 python3.9[104926]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:31 compute-0 sudo[104924]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:31 compute-0 podman[104928]: 2025-10-08 16:02:31.366821682 +0000 UTC m=+0.100508115 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, org.label-schema.build-date=20251007)
Oct 08 16:02:31 compute-0 sudo[105098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzmtshwbemwzqqnxornudhoawxuszks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939351.4161565-1999-239115139833187/AnsiballZ_systemd_service.py'
Oct 08 16:02:31 compute-0 sudo[105098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:32 compute-0 python3.9[105100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:32 compute-0 sudo[105098]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:32 compute-0 sudo[105251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmtjqilbagkilmuaafqxsoomwqpknzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939352.1987104-1999-139275928605818/AnsiballZ_systemd_service.py'
Oct 08 16:02:32 compute-0 sudo[105251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:32 compute-0 python3.9[105253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:02:32 compute-0 sudo[105251]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:34 compute-0 sudo[105404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nefwjofgxylvhraxkynjesrdbyeodmfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939354.2776542-2117-252879485904377/AnsiballZ_file.py'
Oct 08 16:02:34 compute-0 sudo[105404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:34 compute-0 python3.9[105406]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:34 compute-0 sudo[105404]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:35 compute-0 sudo[105556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqnkxhbyutfkaxnmxhvzmgrcmcaidlts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939355.0259628-2117-266765661564701/AnsiballZ_file.py'
Oct 08 16:02:35 compute-0 sudo[105556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:35 compute-0 python3.9[105558]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:35 compute-0 sudo[105556]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:36 compute-0 sudo[105708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npqplbhhtovfsqnobfwpyoudnczxcnhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939355.7631547-2117-275658887960602/AnsiballZ_file.py'
Oct 08 16:02:36 compute-0 sudo[105708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:36 compute-0 python3.9[105710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:36 compute-0 sudo[105708]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:36 compute-0 sudo[105860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqkqwjrnqotbpuzkoblxiadhootyqvny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939356.422363-2117-19158139824348/AnsiballZ_file.py'
Oct 08 16:02:36 compute-0 sudo[105860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:36 compute-0 python3.9[105862]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:36 compute-0 sudo[105860]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:37 compute-0 sudo[106012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmvdcdpdocoyykhjezpayttgdkzkapt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939357.1180668-2117-213770490569540/AnsiballZ_file.py'
Oct 08 16:02:37 compute-0 sudo[106012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:37 compute-0 python3.9[106014]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:37 compute-0 sudo[106012]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:38 compute-0 sudo[106164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzckwegzmnxlvzcgwaypaidankeibbly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939357.8550496-2117-54634108499286/AnsiballZ_file.py'
Oct 08 16:02:38 compute-0 sudo[106164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:38 compute-0 python3.9[106166]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:38 compute-0 sudo[106164]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:38 compute-0 sudo[106316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ambwkcyufeeqkspjthxkwkkysszcvjmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939358.4548302-2117-177312175721466/AnsiballZ_file.py'
Oct 08 16:02:38 compute-0 sudo[106316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:39 compute-0 python3.9[106318]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:39 compute-0 sudo[106316]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:39 compute-0 sudo[106468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahyqznncgcxrfpliqtoscsbllqasmlwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939359.1672957-2117-143876747664616/AnsiballZ_file.py'
Oct 08 16:02:39 compute-0 sudo[106468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:39 compute-0 python3.9[106470]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:39 compute-0 sudo[106468]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:40 compute-0 sudo[106631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtyzdqztkhpvphmjbuhmvipbmcgfjwyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939359.8483217-2231-186858959853756/AnsiballZ_file.py'
Oct 08 16:02:40 compute-0 sudo[106631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:40 compute-0 podman[106594]: 2025-10-08 16:02:40.173306698 +0000 UTC m=+0.064470518 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:02:40 compute-0 python3.9[106639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:40 compute-0 sudo[106631]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:40 compute-0 sudo[106790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlzjhggehapekcnlrfkbmdxyxoiyrmex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939360.534519-2231-97443910944363/AnsiballZ_file.py'
Oct 08 16:02:40 compute-0 sudo[106790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:41 compute-0 python3.9[106792]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:41 compute-0 sudo[106790]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:41 compute-0 sudo[106942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvksckmrccwkbcmvqiejtjkxhmivagzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939361.2470403-2231-62064836098224/AnsiballZ_file.py'
Oct 08 16:02:41 compute-0 sudo[106942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:41 compute-0 python3.9[106944]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:41 compute-0 sudo[106942]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:02:41.858 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:02:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:02:41.859 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:02:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:02:41.859 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:02:42 compute-0 sudo[107095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvjfhnynzelkepjwdnmcyljikijdwkct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939361.9605768-2231-206638971043214/AnsiballZ_file.py'
Oct 08 16:02:42 compute-0 sudo[107095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:42 compute-0 python3.9[107097]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:42 compute-0 sudo[107095]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:42 compute-0 sudo[107247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujhwalgtdkqqeuimndwfobcpdmozqth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939362.6403947-2231-66885848470069/AnsiballZ_file.py'
Oct 08 16:02:42 compute-0 sudo[107247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:43 compute-0 python3.9[107249]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:43 compute-0 sudo[107247]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:43 compute-0 sudo[107399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axywolqpteyocmkigytgrfgdigucfrmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939363.2565324-2231-45533781523228/AnsiballZ_file.py'
Oct 08 16:02:43 compute-0 sudo[107399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:43 compute-0 python3.9[107401]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:43 compute-0 sudo[107399]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:44 compute-0 sudo[107551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akcusgdezixzkxhoqufjhfbaypmkvsfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939363.9163837-2231-26794987442650/AnsiballZ_file.py'
Oct 08 16:02:44 compute-0 sudo[107551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:44 compute-0 python3.9[107553]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:44 compute-0 sudo[107551]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:44 compute-0 sudo[107717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayimayjlejsonzsjiqihdidkgwvkkecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939364.6079009-2231-199219119761429/AnsiballZ_file.py'
Oct 08 16:02:44 compute-0 sudo[107717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:44 compute-0 podman[107677]: 2025-10-08 16:02:44.947946544 +0000 UTC m=+0.083280485 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:02:45 compute-0 python3.9[107726]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:02:45 compute-0 sudo[107717]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:45 compute-0 sudo[107876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfhfmebfzbmuizectxkkkffjabaiodxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939365.3557239-2347-218029666999332/AnsiballZ_command.py'
Oct 08 16:02:45 compute-0 sudo[107876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:45 compute-0 python3.9[107878]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:45 compute-0 sudo[107876]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:46 compute-0 python3.9[108030]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 16:02:47 compute-0 sudo[108181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaujrxvlaczjhbrrsqfnauxilvttdszc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939367.133559-2383-219763524087586/AnsiballZ_systemd_service.py'
Oct 08 16:02:47 compute-0 sudo[108181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:47 compute-0 python3.9[108183]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:02:47 compute-0 systemd[1]: Reloading.
Oct 08 16:02:47 compute-0 systemd-rc-local-generator[108206]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:02:47 compute-0 systemd-sysv-generator[108209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:02:48 compute-0 sudo[108181]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:48 compute-0 sudo[108369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhqrbscrctxvdstrirnvtyiwgrsamgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939368.314276-2399-178155806411049/AnsiballZ_command.py'
Oct 08 16:02:48 compute-0 sudo[108369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:48 compute-0 python3.9[108371]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:48 compute-0 sudo[108369]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:49 compute-0 sudo[108522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhctngsbzxllkgbnklbgsmfrrwqpxymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939368.9934273-2399-176180145970458/AnsiballZ_command.py'
Oct 08 16:02:49 compute-0 sudo[108522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:49 compute-0 python3.9[108524]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:49 compute-0 sudo[108522]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:49 compute-0 sudo[108675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfmmcylrbwzosljfexaytjlkqcxkmgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939369.5977323-2399-182875975361246/AnsiballZ_command.py'
Oct 08 16:02:49 compute-0 sudo[108675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:50 compute-0 python3.9[108677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:50 compute-0 sudo[108675]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:50 compute-0 sudo[108828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epxxbbykjdjbxrvurynnmcbzzwulrjpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939370.2303488-2399-216183236996381/AnsiballZ_command.py'
Oct 08 16:02:50 compute-0 sudo[108828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:50 compute-0 python3.9[108830]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:50 compute-0 sudo[108828]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:50 compute-0 podman[108832]: 2025-10-08 16:02:50.852806557 +0000 UTC m=+0.132742653 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4)
Oct 08 16:02:51 compute-0 sudo[109007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsqsfvbytvbpaytaxdaspixmepijlpih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939370.8866374-2399-235375428880769/AnsiballZ_command.py'
Oct 08 16:02:51 compute-0 sudo[109007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:51 compute-0 python3.9[109009]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:51 compute-0 sudo[109007]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:51 compute-0 sudo[109160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbmsjbuqqsupeguloojeoxudmswybsql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939371.5488539-2399-200786376881148/AnsiballZ_command.py'
Oct 08 16:02:51 compute-0 sudo[109160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:52 compute-0 python3.9[109162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:52 compute-0 sudo[109160]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:52 compute-0 sudo[109313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vobusssyzvpqvaytpcumulmmlqiwunjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939372.2549496-2399-63932690216743/AnsiballZ_command.py'
Oct 08 16:02:52 compute-0 sudo[109313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:52 compute-0 python3.9[109315]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:52 compute-0 sudo[109313]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:53 compute-0 sudo[109466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjdraaiwqlxenczgfvmtvujbbxkoogmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939372.9051814-2399-47750169793199/AnsiballZ_command.py'
Oct 08 16:02:53 compute-0 sudo[109466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:53 compute-0 python3.9[109468]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:02:53 compute-0 sudo[109466]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:54 compute-0 sudo[109619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmvlfcjcneogbftgcawsepiqzzvomsit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939374.677742-2542-97532190865432/AnsiballZ_file.py'
Oct 08 16:02:54 compute-0 sudo[109619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:55 compute-0 python3.9[109621]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:55 compute-0 sudo[109619]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:55 compute-0 sudo[109771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huctamicbsaaorpipkbynsplsxrmoarj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939375.3666575-2542-257085319994822/AnsiballZ_file.py'
Oct 08 16:02:55 compute-0 sudo[109771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:55 compute-0 python3.9[109773]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:55 compute-0 sudo[109771]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:56 compute-0 sudo[109923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cewdauawsdejhjrkrglxoxzdlgffemcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939376.0210364-2542-81297250067847/AnsiballZ_file.py'
Oct 08 16:02:56 compute-0 sudo[109923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:56 compute-0 python3.9[109925]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:56 compute-0 sudo[109923]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:57 compute-0 sudo[110075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fspzlewfgqytlulqrphmtksuxjgbkabw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939376.8703616-2586-151677114652909/AnsiballZ_file.py'
Oct 08 16:02:57 compute-0 sudo[110075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:57 compute-0 python3.9[110077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:57 compute-0 sudo[110075]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:57 compute-0 sudo[110227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewiuaavpxvzmlwhtkdwcftacrnawqif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939377.5440576-2586-144197315045634/AnsiballZ_file.py'
Oct 08 16:02:57 compute-0 sudo[110227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:58 compute-0 python3.9[110229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:58 compute-0 sudo[110227]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:58 compute-0 sudo[110379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfsggmzebrnifszxydqdnpchczeogsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939378.2225277-2586-265553806082513/AnsiballZ_file.py'
Oct 08 16:02:58 compute-0 sudo[110379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:58 compute-0 python3.9[110381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:58 compute-0 sudo[110379]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:59 compute-0 sudo[110531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmflgrmixwhozawhryfqvarbzsiutjgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939378.9552815-2586-199235346726030/AnsiballZ_file.py'
Oct 08 16:02:59 compute-0 sudo[110531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:02:59 compute-0 python3.9[110533]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:02:59 compute-0 sudo[110531]: pam_unix(sudo:session): session closed for user root
Oct 08 16:02:59 compute-0 sudo[110683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxfdmhcjhzksjhtgwccvqbieccwvzqfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939379.5925317-2586-194959178828741/AnsiballZ_file.py'
Oct 08 16:02:59 compute-0 sudo[110683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:00 compute-0 python3.9[110685]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:00 compute-0 sudo[110683]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:00 compute-0 sudo[110835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwmivtlnjyonuokwbrdqqdxaauglnjaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939380.3035834-2586-96128839716668/AnsiballZ_file.py'
Oct 08 16:03:00 compute-0 sudo[110835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:00 compute-0 python3.9[110837]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:00 compute-0 sudo[110835]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:01 compute-0 sudo[110987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhoxicaftfkljfkdaiymjnfwdvrnhpon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939380.968705-2586-121122142245785/AnsiballZ_file.py'
Oct 08 16:03:01 compute-0 sudo[110987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:01 compute-0 python3.9[110989]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:01 compute-0 sudo[110987]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:01 compute-0 podman[110990]: 2025-10-08 16:03:01.559952701 +0000 UTC m=+0.069521268 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:03:01 compute-0 sudo[111157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrhijobwdfvzmcsqxrpqaglatzorqatm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939381.6560054-2586-117416122730334/AnsiballZ_file.py'
Oct 08 16:03:01 compute-0 sudo[111157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:02 compute-0 python3.9[111159]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:02 compute-0 sudo[111157]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:02 compute-0 sudo[111309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqsdcpdwluqdhkbodvhildiybivrcsbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939382.3695993-2586-71558886230038/AnsiballZ_file.py'
Oct 08 16:03:02 compute-0 sudo[111309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:02 compute-0 python3.9[111311]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:02 compute-0 sudo[111309]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:07 compute-0 sudo[111461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plnyimyavuntswgwknswuorxrjjwgoxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939387.0463989-2851-257899429508943/AnsiballZ_getent.py'
Oct 08 16:03:07 compute-0 sudo[111461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:07 compute-0 python3.9[111463]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 08 16:03:07 compute-0 sudo[111461]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:08 compute-0 sudo[111614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjcvlkrjteztgxtdrazzolwwxldzxywr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939387.8916202-2867-281421153961917/AnsiballZ_group.py'
Oct 08 16:03:08 compute-0 sudo[111614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:08 compute-0 python3.9[111616]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 16:03:08 compute-0 groupadd[111617]: group added to /etc/group: name=nova, GID=42436
Oct 08 16:03:08 compute-0 groupadd[111617]: group added to /etc/gshadow: name=nova
Oct 08 16:03:08 compute-0 groupadd[111617]: new group: name=nova, GID=42436
Oct 08 16:03:08 compute-0 sudo[111614]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:09 compute-0 sudo[111772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yffayqcminptdhqsfzmedfawdjgjovlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939388.8372433-2883-217515054979599/AnsiballZ_user.py'
Oct 08 16:03:09 compute-0 sudo[111772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:09 compute-0 python3.9[111774]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 16:03:09 compute-0 useradd[111776]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 08 16:03:09 compute-0 useradd[111776]: add 'nova' to group 'libvirt'
Oct 08 16:03:09 compute-0 useradd[111776]: add 'nova' to shadow group 'libvirt'
Oct 08 16:03:09 compute-0 sudo[111772]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:10 compute-0 podman[111807]: 2025-10-08 16:03:10.447266101 +0000 UTC m=+0.056055399 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 08 16:03:10 compute-0 sshd-session[111827]: Accepted publickey for zuul from 192.168.122.30 port 49284 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:03:10 compute-0 systemd-logind[847]: New session 10 of user zuul.
Oct 08 16:03:10 compute-0 systemd[1]: Started Session 10 of User zuul.
Oct 08 16:03:10 compute-0 sshd-session[111827]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:03:10 compute-0 sshd-session[111830]: Received disconnect from 192.168.122.30 port 49284:11: disconnected by user
Oct 08 16:03:10 compute-0 sshd-session[111830]: Disconnected from user zuul 192.168.122.30 port 49284
Oct 08 16:03:10 compute-0 sshd-session[111827]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:03:10 compute-0 systemd-logind[847]: Session 10 logged out. Waiting for processes to exit.
Oct 08 16:03:10 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Oct 08 16:03:10 compute-0 systemd-logind[847]: Removed session 10.
Oct 08 16:03:11 compute-0 python3.9[111980]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:12 compute-0 python3.9[112101]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939391.1208947-2933-172233602316047/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:12 compute-0 python3.9[112251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:13 compute-0 python3.9[112328]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:14 compute-0 python3.9[112478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:14 compute-0 python3.9[112599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939393.4964385-2933-183945556992884/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:15 compute-0 podman[112723]: 2025-10-08 16:03:15.167962862 +0000 UTC m=+0.060032542 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 08 16:03:15 compute-0 python3.9[112759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:15 compute-0 python3.9[112890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939394.8300986-2933-159345546623788/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:16 compute-0 python3.9[113040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:17 compute-0 python3.9[113161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939396.0742123-2933-71764124719234/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:17 compute-0 sudo[113311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqkgmhvcddfrfugkeoihwxlvisscqxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939397.3718288-3071-74448598109873/AnsiballZ_file.py'
Oct 08 16:03:17 compute-0 sudo[113311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:17 compute-0 python3.9[113313]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:03:17 compute-0 sudo[113311]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:18 compute-0 sudo[113463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqdsqqirpzleeoytberajpiyjdsprbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939398.0986192-3087-110310134903238/AnsiballZ_copy.py'
Oct 08 16:03:18 compute-0 sudo[113463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:18 compute-0 python3.9[113465]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:03:18 compute-0 sudo[113463]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:19 compute-0 sudo[113615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whhycinuinpjbknrkebtwgdohciejkbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939398.7652392-3103-110400533869239/AnsiballZ_stat.py'
Oct 08 16:03:19 compute-0 sudo[113615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:19 compute-0 python3.9[113617]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:19 compute-0 sudo[113615]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:19 compute-0 sudo[113767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgfkkaznjbnhbjcczdswhhehfzqowxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939399.5270503-3119-166402683155391/AnsiballZ_stat.py'
Oct 08 16:03:19 compute-0 sudo[113767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:19 compute-0 python3.9[113769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:20 compute-0 sudo[113767]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:20 compute-0 sudo[113890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycoudkytrcbqttngpllfzbgciuerqlzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939399.5270503-3119-166402683155391/AnsiballZ_copy.py'
Oct 08 16:03:20 compute-0 sudo[113890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:20 compute-0 python3.9[113892]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759939399.5270503-3119-166402683155391/.source _original_basename=.878w279r follow=False checksum=9c785e6fd1f567a7de2cc5d308cea5ff01ec21ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 08 16:03:20 compute-0 sudo[113890]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:21 compute-0 podman[114018]: 2025-10-08 16:03:21.329298284 +0000 UTC m=+0.150278591 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true)
Oct 08 16:03:21 compute-0 python3.9[114060]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:22 compute-0 python3.9[114222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:22 compute-0 python3.9[114343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939401.6253524-3171-14357931303077/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=03795cc2de3dc880d6bed514541de74c0149ff02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:23 compute-0 python3.9[114493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:03:23 compute-0 python3.9[114614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939402.8965938-3201-222508351063752/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=50e866c8689f0d34514b5d1ea3fe3ffb51869e6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:03:24 compute-0 sudo[114764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxniaebwskvoovfruazamlolsfkllzvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939404.2974913-3235-201346252263892/AnsiballZ_container_config_data.py'
Oct 08 16:03:24 compute-0 sudo[114764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:24 compute-0 python3.9[114766]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 08 16:03:24 compute-0 sudo[114764]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:25 compute-0 sudo[114916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yereupidmogbeogsgweathnwocovqyde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939405.1195617-3253-107430354791391/AnsiballZ_container_config_hash.py'
Oct 08 16:03:25 compute-0 sudo[114916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:25 compute-0 python3.9[114918]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 16:03:25 compute-0 sudo[114916]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:26 compute-0 sudo[115068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndrlahhmdckuyhkzrthxiwrgwfuxspjh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939406.156335-3273-76127687688357/AnsiballZ_edpm_container_manage.py'
Oct 08 16:03:26 compute-0 sudo[115068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:26 compute-0 python3[115070]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 16:03:27 compute-0 podman[115106]: 2025-10-08 16:03:26.960043071 +0000 UTC m=+0.022258783 image pull 832c3785e85c4c662af44aedbc2f19929fad0c8bb172c8f5e1173336595805dc 38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Oct 08 16:03:27 compute-0 podman[115106]: 2025-10-08 16:03:27.079669678 +0000 UTC m=+0.141885370 container create 87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, container_name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:03:27 compute-0 python3[115070]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z 38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 08 16:03:27 compute-0 sudo[115068]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:28 compute-0 sudo[115295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnijvwgbxtlpgpevneaflouestpmefvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939407.8178298-3289-94004120098742/AnsiballZ_stat.py'
Oct 08 16:03:28 compute-0 sudo[115295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:28 compute-0 python3.9[115297]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:28 compute-0 sudo[115295]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:29 compute-0 sudo[115449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqtjooldqtmvbngnpupqdpwtoatyxow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939408.928726-3313-54054056485972/AnsiballZ_container_config_data.py'
Oct 08 16:03:29 compute-0 sudo[115449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:29 compute-0 python3.9[115451]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 08 16:03:29 compute-0 sudo[115449]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:30 compute-0 sudo[115601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inztdyhpmqfxobjmzxdormifgcrbcsge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939409.7861886-3331-13465928545778/AnsiballZ_container_config_hash.py'
Oct 08 16:03:30 compute-0 sudo[115601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:30 compute-0 python3.9[115603]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 16:03:30 compute-0 sudo[115601]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:30 compute-0 sudo[115753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedfmbkrtglvyhnsuzaxhmgheabzlxev ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939410.6252801-3351-135632267115722/AnsiballZ_edpm_container_manage.py'
Oct 08 16:03:30 compute-0 sudo[115753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:31 compute-0 python3[115755]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 16:03:31 compute-0 podman[115789]: 2025-10-08 16:03:31.465442684 +0000 UTC m=+0.048040402 container create bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Oct 08 16:03:31 compute-0 podman[115789]: 2025-10-08 16:03:31.442470448 +0000 UTC m=+0.025068196 image pull 832c3785e85c4c662af44aedbc2f19929fad0c8bb172c8f5e1173336595805dc 38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest
Oct 08 16:03:31 compute-0 python3[115755]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro 38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest kolla_start
Oct 08 16:03:31 compute-0 sudo[115753]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:32 compute-0 sudo[115991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdlyzbimmmnyrdelcrcedocxibuoomgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939411.9328122-3367-177752946996822/AnsiballZ_stat.py'
Oct 08 16:03:32 compute-0 sudo[115991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:32 compute-0 podman[115951]: 2025-10-08 16:03:32.251146805 +0000 UTC m=+0.076007285 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=watcher_latest, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 16:03:32 compute-0 python3.9[115996]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:32 compute-0 sudo[115991]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:33 compute-0 sudo[116148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikpxurwgdmhjscnspanoibqpprgdqgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939412.7064364-3385-178179759104730/AnsiballZ_file.py'
Oct 08 16:03:33 compute-0 sudo[116148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:33 compute-0 python3.9[116150]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:03:33 compute-0 sudo[116148]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:33 compute-0 sudo[116299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtyudxxamxwjcmbldmwzncbzojtcjkha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939413.2846928-3385-262676630933944/AnsiballZ_copy.py'
Oct 08 16:03:33 compute-0 sudo[116299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:33 compute-0 python3.9[116301]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759939413.2846928-3385-262676630933944/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:03:33 compute-0 sudo[116299]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:34 compute-0 sudo[116375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abnalmjutpuqkgrjlvugidqsicuiryot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939413.2846928-3385-262676630933944/AnsiballZ_systemd.py'
Oct 08 16:03:34 compute-0 sudo[116375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:34 compute-0 python3.9[116377]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:03:34 compute-0 systemd[1]: Reloading.
Oct 08 16:03:34 compute-0 systemd-sysv-generator[116408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:03:34 compute-0 systemd-rc-local-generator[116404]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:03:34 compute-0 sudo[116375]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:35 compute-0 sudo[116485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odgtvsbvyftyyvyxrxjvlualkjjvooeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939413.2846928-3385-262676630933944/AnsiballZ_systemd.py'
Oct 08 16:03:35 compute-0 sudo[116485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:35 compute-0 python3.9[116487]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:03:35 compute-0 systemd[1]: Reloading.
Oct 08 16:03:35 compute-0 systemd-rc-local-generator[116518]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:03:35 compute-0 systemd-sysv-generator[116522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:03:35 compute-0 systemd[1]: Starting nova_compute container...
Oct 08 16:03:35 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:35 compute-0 podman[116528]: 2025-10-08 16:03:35.985435489 +0000 UTC m=+0.143766850 container init bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Oct 08 16:03:36 compute-0 podman[116528]: 2025-10-08 16:03:36.00672535 +0000 UTC m=+0.165056691 container start bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.4, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:03:36 compute-0 podman[116528]: nova_compute
Oct 08 16:03:36 compute-0 nova_compute[116543]: + sudo -E kolla_set_configs
Oct 08 16:03:36 compute-0 systemd[1]: Started nova_compute container.
Oct 08 16:03:36 compute-0 sudo[116485]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Validating config file
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying service configuration files
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Deleting /etc/ceph
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Creating directory /etc/ceph
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /etc/ceph
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Writing out command to execute
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:36 compute-0 nova_compute[116543]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 16:03:36 compute-0 nova_compute[116543]: ++ cat /run_command
Oct 08 16:03:36 compute-0 nova_compute[116543]: + CMD=nova-compute
Oct 08 16:03:36 compute-0 nova_compute[116543]: + ARGS=
Oct 08 16:03:36 compute-0 nova_compute[116543]: + sudo kolla_copy_cacerts
Oct 08 16:03:36 compute-0 nova_compute[116543]: + [[ ! -n '' ]]
Oct 08 16:03:36 compute-0 nova_compute[116543]: + . kolla_extend_start
Oct 08 16:03:36 compute-0 nova_compute[116543]: Running command: 'nova-compute'
Oct 08 16:03:36 compute-0 nova_compute[116543]: + echo 'Running command: '\''nova-compute'\'''
Oct 08 16:03:36 compute-0 nova_compute[116543]: + umask 0022
Oct 08 16:03:36 compute-0 nova_compute[116543]: + exec nova-compute
Oct 08 16:03:37 compute-0 python3.9[116704]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:37 compute-0 python3.9[116854]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:38 compute-0 python3.9[117004]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:03:39 compute-0 sudo[117156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozhxzdvgvushdtbyeudoruvrarrctcsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939418.9265141-3505-26314553648863/AnsiballZ_podman_container.py'
Oct 08 16:03:39 compute-0 sudo[117156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.226 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.227 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.227 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.227 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 08 16:03:39 compute-0 python3.9[117158]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.483 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.515 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:03:39 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:03:39 compute-0 sudo[117156]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.607 2 INFO oslo_service.periodic_task [-] Skipping periodic task _heal_instance_info_cache because its interval is negative
Oct 08 16:03:39 compute-0 nova_compute[116543]: 2025-10-08 16:03:39.609 2 WARNING oslo_config.cfg [None req-5a277280-7125-4f84-8839-487bf7f2780d - - - - - -] Deprecated: Option "heartbeat_in_pthread" from group "oslo_messaging_rabbit" is deprecated for removal (The option is related to Eventlet which will be removed. In addition this has never worked as expected with services using eventlet for core service framework.).  Its value may be silently ignored in the future.
Oct 08 16:03:40 compute-0 sudo[117331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namtkhxazmhlffiptiyjhrzhwrilowxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939419.8007548-3521-75575067991265/AnsiballZ_systemd.py'
Oct 08 16:03:40 compute-0 sudo[117331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:40 compute-0 python3.9[117333]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:03:40 compute-0 systemd[1]: Stopping nova_compute container...
Oct 08 16:03:40 compute-0 systemd[1]: libpod-bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb.scope: Deactivated successfully.
Oct 08 16:03:40 compute-0 systemd[1]: libpod-bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb.scope: Consumed 2.883s CPU time.
Oct 08 16:03:40 compute-0 podman[117337]: 2025-10-08 16:03:40.559331497 +0000 UTC m=+0.058238104 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:03:40 compute-0 podman[117338]: 2025-10-08 16:03:40.564447774 +0000 UTC m=+0.061262011 container died bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=nova_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, config_id=edpm)
Oct 08 16:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb-userdata-shm.mount: Deactivated successfully.
Oct 08 16:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e-merged.mount: Deactivated successfully.
Oct 08 16:03:40 compute-0 podman[117338]: 2025-10-08 16:03:40.640884469 +0000 UTC m=+0.137698696 container cleanup bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0)
Oct 08 16:03:40 compute-0 podman[117338]: nova_compute
Oct 08 16:03:40 compute-0 podman[117383]: nova_compute
Oct 08 16:03:40 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 08 16:03:40 compute-0 systemd[1]: Stopped nova_compute container.
Oct 08 16:03:40 compute-0 systemd[1]: Starting nova_compute container...
Oct 08 16:03:40 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662741ccc1749da177ce2f17ca7a121d6d4060ccf9a6fc232fd453639d8a587e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:40 compute-0 podman[117397]: 2025-10-08 16:03:40.839540114 +0000 UTC m=+0.097615894 container init bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, tcib_build_tag=watcher_latest, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 16:03:40 compute-0 podman[117397]: 2025-10-08 16:03:40.846647679 +0000 UTC m=+0.104723439 container start bfd47280e349e87b3238cb52138641c01a26712a475910fb89656c96a06e46bb (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:03:40 compute-0 nova_compute[117413]: + sudo -E kolla_set_configs
Oct 08 16:03:40 compute-0 podman[117397]: nova_compute
Oct 08 16:03:40 compute-0 systemd[1]: Started nova_compute container.
Oct 08 16:03:40 compute-0 sudo[117331]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Validating config file
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying service configuration files
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /etc/ceph
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Creating directory /etc/ceph
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /etc/ceph
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Writing out command to execute
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:40 compute-0 nova_compute[117413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 08 16:03:40 compute-0 nova_compute[117413]: ++ cat /run_command
Oct 08 16:03:40 compute-0 nova_compute[117413]: + CMD=nova-compute
Oct 08 16:03:40 compute-0 nova_compute[117413]: + ARGS=
Oct 08 16:03:40 compute-0 nova_compute[117413]: + sudo kolla_copy_cacerts
Oct 08 16:03:40 compute-0 nova_compute[117413]: + [[ ! -n '' ]]
Oct 08 16:03:40 compute-0 nova_compute[117413]: + . kolla_extend_start
Oct 08 16:03:40 compute-0 nova_compute[117413]: Running command: 'nova-compute'
Oct 08 16:03:40 compute-0 nova_compute[117413]: + echo 'Running command: '\''nova-compute'\'''
Oct 08 16:03:40 compute-0 nova_compute[117413]: + umask 0022
Oct 08 16:03:40 compute-0 nova_compute[117413]: + exec nova-compute
Oct 08 16:03:41 compute-0 sudo[117574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dypwyutiujftgcnlabuknraibusmbrdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939421.1176708-3539-78187635149541/AnsiballZ_podman_container.py'
Oct 08 16:03:41 compute-0 sudo[117574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:41 compute-0 python3.9[117576]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 08 16:03:41 compute-0 systemd[1]: Started libpod-conmon-87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d.scope.
Oct 08 16:03:41 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:03:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:03:41.859 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:03:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:03:41.860 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:03:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:03:41.860 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f42dd878745ecaa34afba65c729fe7941ef70c5886dfc6ac5858e3923badb4/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f42dd878745ecaa34afba65c729fe7941ef70c5886dfc6ac5858e3923badb4/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2f42dd878745ecaa34afba65c729fe7941ef70c5886dfc6ac5858e3923badb4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 08 16:03:41 compute-0 podman[117601]: 2025-10-08 16:03:41.884031393 +0000 UTC m=+0.131959870 container init 87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 08 16:03:41 compute-0 podman[117601]: 2025-10-08 16:03:41.894646028 +0000 UTC m=+0.142574465 container start 87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=nova_compute_init)
Oct 08 16:03:41 compute-0 python3.9[117576]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Applying nova statedir ownership
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 08 16:03:41 compute-0 nova_compute_init[117623]: INFO:nova_statedir:Nova statedir ownership complete
Oct 08 16:03:41 compute-0 systemd[1]: libpod-87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d.scope: Deactivated successfully.
Oct 08 16:03:42 compute-0 podman[117643]: 2025-10-08 16:03:42.037700356 +0000 UTC m=+0.032183325 container died 87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct 08 16:03:42 compute-0 sudo[117574]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d-userdata-shm.mount: Deactivated successfully.
Oct 08 16:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2f42dd878745ecaa34afba65c729fe7941ef70c5886dfc6ac5858e3923badb4-merged.mount: Deactivated successfully.
Oct 08 16:03:42 compute-0 podman[117643]: 2025-10-08 16:03:42.077673764 +0000 UTC m=+0.072156713 container cleanup 87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d (image=38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest, name=nova_compute_init, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'image': '38.102.83.163:5001/podified-master-centos10/openstack-nova-compute:watcher_latest', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm)
Oct 08 16:03:42 compute-0 systemd[1]: libpod-conmon-87638364a1ef37ad75d7687f27d36be07e9707978753d1490f58670df2ef698d.scope: Deactivated successfully.
Oct 08 16:03:43 compute-0 sshd-session[83110]: Connection closed by 192.168.122.30 port 60736
Oct 08 16:03:43 compute-0 sshd-session[83088]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:03:43 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Oct 08 16:03:43 compute-0 systemd[1]: session-8.scope: Consumed 2min 28.323s CPU time.
Oct 08 16:03:43 compute-0 systemd-logind[847]: Session 8 logged out. Waiting for processes to exit.
Oct 08 16:03:43 compute-0 systemd-logind[847]: Removed session 8.
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.105 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.106 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.106 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.12/site-packages/os_vif/__init__.py:44
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.106 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.302 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.323 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.361 2 INFO oslo_service.periodic_task [-] Skipping periodic task _heal_instance_info_cache because its interval is negative
Oct 08 16:03:43 compute-0 nova_compute[117413]: 2025-10-08 16:03:43.362 2 WARNING oslo_config.cfg [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Deprecated: Option "heartbeat_in_pthread" from group "oslo_messaging_rabbit" is deprecated for removal (The option is related to Eventlet which will be removed. In addition this has never worked as expected with services using eventlet for core service framework.).  Its value may be silently ignored in the future.
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.101 2 INFO nova.virt.driver [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.397 2 INFO nova.compute.provider_config [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 08 16:03:45 compute-0 podman[117699]: 2025-10-08 16:03:45.470031317 +0000 UTC m=+0.068409565 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.903 2 DEBUG oslo_concurrency.lockutils [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.904 2 DEBUG oslo_concurrency.lockutils [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.904 2 DEBUG oslo_concurrency.lockutils [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.905 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/service.py:274
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.905 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.905 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.905 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.905 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.905 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.906 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cell_worker_thread_pool_size   = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.907 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.907 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.907 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.907 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.907 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.907 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.908 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.909 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.909 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_green_pool_size        = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.909 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.909 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.909 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] default_thread_pool_size       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.909 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] fatal_deprecations             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] heal_instance_info_cache_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.910 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] injected_network_template      = /usr/lib/python3.12/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.911 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.912 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.913 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.914 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.915 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] my_shared_fs_storage_ip        = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.916 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] pybasedir                      = /usr/lib/python3.12/site-packages log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.917 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.918 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.919 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.920 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.921 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] thread_pool_statistic_period   = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.922 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.923 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_brick.lock_path             = /var/lib/nova/tmp log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.924 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.925 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.926 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.927 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.927 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.927 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.neutron_default_project_id = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.927 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.response_validation        = warn log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.927 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.927 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.928 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.backend_expiration_time  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.929 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.enforce_fips_mode        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_password        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.930 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.memcache_username        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.931 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_db                 = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_sentinel_service_name = mymaster log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_sentinels          = ['localhost:26379'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_server             = localhost:6379 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_socket_timeout     = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.redis_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.932 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.933 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.934 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.934 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.934 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.934 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.934 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.934 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.935 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.936 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.sharing_providers_max_uuids_per_request = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.937 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] consoleauth.enforce_session_timeout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.938 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.939 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.940 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.asyncio_connection    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.941 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.942 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.943 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.asyncio_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.asyncio_slave_connection = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.944 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.945 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ephemeral_storage_encryption.default_format = luks log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.946 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.947 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.948 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.948 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.948 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.948 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.950 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.950 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.950 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.950 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.950 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.951 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.952 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.953 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.954 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.service_type            = shared-file-system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.share_apply_policy_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] manila.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.955 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.956 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.956 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.956 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.956 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.956 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.956 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.957 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.conductor_group         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.958 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.retriable_status_codes  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.shard                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.959 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.960 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.961 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.962 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.963 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.approle_role_id          = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.approle_secret_id        = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.kv_path                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.964 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.root_token_id            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.timeout                  = 60.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.965 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.966 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.967 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.ceph_mount_options     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.ceph_mount_point_base  = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.968 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.969 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.970 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.971 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.971 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.971 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.971 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.971 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.971 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.972 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.973 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.974 2 WARNING oslo_config.cfg [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 08 16:03:45 compute-0 nova_compute[117413]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 08 16:03:45 compute-0 nova_compute[117413]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 08 16:03:45 compute-0 nova_compute[117413]: and ``live_migration_inbound_addr`` respectively.
Oct 08 16:03:45 compute-0 nova_compute[117413]: ).  Its value may be silently ignored in the future.
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.974 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.974 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.974 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.974 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.974 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.migration_inbound_addr = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.975 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.976 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.977 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.978 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.tb_cache_size          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.979 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.volume_enforce_multipath = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.980 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.981 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.982 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.983 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.984 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] notifications.include_share_mapping = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] notifications.notification_format = both log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] notifications.notify_on_state_change = vm_and_task_state log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.985 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.986 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.987 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.988 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.989 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.990 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.991 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.992 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.992 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.992 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.992 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.992 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.992 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.993 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.993 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.993 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.993 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.unified_limits_resource_list = ['servers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.993 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] quota.unified_limits_resource_strategy = require log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.993 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.994 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.995 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.995 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.995 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.995 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.995 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.995 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.hypervisor_version_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.image_props_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.996 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.image_props_weight_setting = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.num_instances_weight_multiplier = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.997 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.998 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.999 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.999 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.999 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.999 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:45 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.999 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:45.999 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.000 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.000 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.000 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.000 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.000 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.000 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.001 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.002 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.002 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.002 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.002 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.002 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.002 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.require_secure           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.spice_direct_proxy_base_url = http://127.0.0.1:13002/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.003 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.004 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.005 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.006 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.007 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.008 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.009 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.009 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.009 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.009 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.009 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.009 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.010 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.010 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.010 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.010 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.010 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.011 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.012 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.013 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.014 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.015 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.hostname = compute-0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.016 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_splay = 0.0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.processname = nova-compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.017 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_stream_fanout = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_quorum_queue = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.018 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.019 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.019 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.019 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.019 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.019 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.019 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_rabbit.use_queue_manager = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.020 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_notifications.driver = ['messagingv2'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.020 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.020 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.020 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.020 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.020 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.021 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.021 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.021 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.021 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.021 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.021 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.endpoint_interface  = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.022 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.023 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.endpoint_region_name = regionOne log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.023 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.endpoint_service_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.023 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.endpoint_service_type = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.023 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.023 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.023 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.024 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.024 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.024 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.024 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.024 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.024 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.025 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.025 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.retriable_status_codes = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.025 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.025 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.025 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.025 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.026 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.026 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.026 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.026 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.026 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.026 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.027 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.027 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.027 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.027 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.027 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.028 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.028 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.028 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.028 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.028 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.028 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.029 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.029 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.029 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.029 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.029 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.029 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.030 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.030 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.030 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.030 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.030 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.030 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.031 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.default_qos_type    = linux-noop log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.032 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.032 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.032 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.032 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.032 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.032 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.033 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.capabilities   = [21, 2] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.033 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.033 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.033 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.033 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.033 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.034 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.034 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.034 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.034 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.034 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.log_daemon_traceback = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.034 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.035 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.035 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.035 2 DEBUG oslo_service.backend._eventlet.service [None req-3cfd35f8-c98f-4273-aab9-731b969b24f1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.037 2 INFO nova.service [-] Starting compute node (version 32.1.0-0.20251008114656.23cad1d.el10)
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.549 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:498
Oct 08 16:03:46 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Oct 08 16:03:46 compute-0 systemd[1]: Started libvirt QEMU daemon.
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.642 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fea8c652840> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:504
Oct 08 16:03:46 compute-0 nova_compute[117413]: libvirt:  error : internal error: could not initialize domain event timer
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.645 2 WARNING nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] URI qemu:///system does not support events: internal error: could not initialize domain event timer: libvirt.libvirtError: internal error: could not initialize domain event timer
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.645 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fea8c652840> _get_new_connection /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:525
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.647 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Starting native event thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:484
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.648 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:490
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.648 2 INFO nova.utils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] The default thread pool MainProcess.default is initialized
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.649 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Starting connection event dispatch thread _init_events /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:493
Oct 08 16:03:46 compute-0 nova_compute[117413]: 2025-10-08 16:03:46.649 2 INFO nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Connection event '1' reason 'None'
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.155 2 WARNING nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.157 2 DEBUG nova.virt.libvirt.volume.mount [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.12/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.593 2 INFO nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Libvirt host capabilities <capabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]: 
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <host>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <uuid>7ed7365c-752d-466f-920a-97a2ec0fb2e1</uuid>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <arch>x86_64</arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model>EPYC-Rome-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <vendor>AMD</vendor>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <microcode version='16777317'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <signature family='23' model='49' stepping='0'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='x2apic'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='tsc-deadline'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='osxsave'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='hypervisor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='tsc_adjust'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='spec-ctrl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='stibp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='arch-capabilities'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='cmp_legacy'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='topoext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='virt-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='lbrv'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='tsc-scale'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='vmcb-clean'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='pause-filter'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='pfthreshold'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='svme-addr-chk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='rdctl-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='skip-l1dfl-vmentry'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='mds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature name='pschange-mc-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <pages unit='KiB' size='4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <pages unit='KiB' size='2048'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <pages unit='KiB' size='1048576'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <power_management>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <suspend_mem/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <suspend_disk/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <suspend_hybrid/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </power_management>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <iommu support='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <migration_features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <live/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <uri_transports>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <uri_transport>tcp</uri_transport>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <uri_transport>rdma</uri_transport>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </uri_transports>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </migration_features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <topology>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <cells num='1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <cell id='0'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           <memory unit='KiB'>7864096</memory>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           <pages unit='KiB' size='4'>1966024</pages>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           <pages unit='KiB' size='2048'>0</pages>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           <distances>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <sibling id='0' value='10'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           </distances>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           <cpus num='8'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:           </cpus>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         </cell>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </cells>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </topology>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <cache>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </cache>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <secmodel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model>selinux</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <doi>0</doi>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </secmodel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <secmodel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model>dac</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <doi>0</doi>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </secmodel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </host>
Oct 08 16:03:47 compute-0 nova_compute[117413]: 
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <guest>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <os_type>hvm</os_type>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <arch name='i686'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <wordsize>32</wordsize>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <domain type='qemu'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <domain type='kvm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <pae/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <nonpae/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <acpi default='on' toggle='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <apic default='on' toggle='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <cpuselection/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <deviceboot/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <disksnapshot default='on' toggle='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <externalSnapshot/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </guest>
Oct 08 16:03:47 compute-0 nova_compute[117413]: 
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <guest>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <os_type>hvm</os_type>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <arch name='x86_64'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <wordsize>64</wordsize>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <domain type='qemu'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <domain type='kvm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <acpi default='on' toggle='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <apic default='on' toggle='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <cpuselection/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <deviceboot/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <disksnapshot default='on' toggle='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <externalSnapshot/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </guest>
Oct 08 16:03:47 compute-0 nova_compute[117413]: 
Oct 08 16:03:47 compute-0 nova_compute[117413]: </capabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]: 
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.600 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:944
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.622 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 08 16:03:47 compute-0 nova_compute[117413]: <domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <domain>kvm</domain>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <arch>i686</arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <vcpu max='240'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <iothreads supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <os supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='firmware'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <loader supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>rom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pflash</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='readonly'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>yes</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='secure'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </loader>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </os>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-passthrough' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='hostPassthroughMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='maximum' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='maximumMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-model' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <vendor>AMD</vendor>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='x2apic'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='hypervisor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='stibp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='overflow-recov'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='succor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lbrv'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-scale'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='flushbyasid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pause-filter'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pfthreshold'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rdctl-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='mds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='gds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rfds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='disable' name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='custom' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Dhyana-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-128'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-256'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-512'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v6'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v7'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <memoryBacking supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='sourceType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>file</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>anonymous</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>memfd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </memoryBacking>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <disk supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='diskDevice'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>disk</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cdrom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>floppy</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>lun</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ide</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>fdc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>sata</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <graphics supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vnc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egl-headless</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>dbus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <video supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='modelType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vga</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cirrus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>none</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>bochs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ramfb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </video>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hostdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='mode'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>subsystem</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='startupPolicy'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>mandatory</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>requisite</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>optional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='subsysType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pci</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='capsType'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='pciBackend'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hostdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <rng supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>random</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <filesystem supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='driverType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>path</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>handle</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtiofs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </filesystem>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <tpm supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-tis</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-crb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emulator</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>external</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendVersion'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>2.0</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </tpm>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <redirdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </redirdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <channel supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pty</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>unix</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </channel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <crypto supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>qemu</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </crypto>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <interface supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>passt</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <panic supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>isa</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>hyperv</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </panic>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <gic supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <vmcoreinfo supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <genid supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backingStoreInput supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backup supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <async-teardown supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <ps2 supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sev supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sgx supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hyperv supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='features'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>relaxed</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vapic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>spinlocks</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vpindex</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>runtime</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>synic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>stimer</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reset</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vendor_id</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>frequencies</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reenlightenment</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tlbflush</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ipi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>avic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emsr_bitmap</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>xmm_input</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hyperv>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <launchSecurity supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </features>
Oct 08 16:03:47 compute-0 nova_compute[117413]: </domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.629 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 08 16:03:47 compute-0 nova_compute[117413]: <domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <domain>kvm</domain>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <arch>i686</arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <vcpu max='4096'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <iothreads supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <os supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='firmware'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <loader supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>rom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pflash</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='readonly'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>yes</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='secure'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </loader>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </os>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-passthrough' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='hostPassthroughMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='maximum' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='maximumMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-model' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <vendor>AMD</vendor>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='x2apic'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='hypervisor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='stibp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='overflow-recov'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='succor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lbrv'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-scale'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='flushbyasid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pause-filter'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pfthreshold'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rdctl-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='mds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='gds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rfds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='disable' name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='custom' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Dhyana-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-128'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-256'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-512'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v6'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v7'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <memoryBacking supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='sourceType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>file</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>anonymous</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>memfd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </memoryBacking>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <disk supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='diskDevice'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>disk</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cdrom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>floppy</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>lun</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>fdc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>sata</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <graphics supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vnc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egl-headless</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>dbus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <video supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='modelType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vga</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cirrus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>none</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>bochs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ramfb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </video>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hostdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='mode'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>subsystem</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='startupPolicy'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>mandatory</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>requisite</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>optional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='subsysType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pci</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='capsType'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='pciBackend'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hostdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <rng supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>random</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <filesystem supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='driverType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>path</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>handle</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtiofs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </filesystem>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <tpm supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-tis</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-crb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emulator</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>external</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendVersion'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>2.0</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </tpm>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <redirdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </redirdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <channel supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pty</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>unix</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </channel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <crypto supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>qemu</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </crypto>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <interface supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>passt</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <panic supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>isa</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>hyperv</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </panic>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <gic supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <vmcoreinfo supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <genid supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backingStoreInput supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backup supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <async-teardown supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <ps2 supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sev supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sgx supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hyperv supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='features'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>relaxed</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vapic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>spinlocks</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vpindex</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>runtime</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>synic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>stimer</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reset</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vendor_id</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>frequencies</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reenlightenment</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tlbflush</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ipi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>avic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emsr_bitmap</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>xmm_input</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hyperv>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <launchSecurity supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </features>
Oct 08 16:03:47 compute-0 nova_compute[117413]: </domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.662 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:944
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.666 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 08 16:03:47 compute-0 nova_compute[117413]: <domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <domain>kvm</domain>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <arch>x86_64</arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <vcpu max='4096'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <iothreads supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <os supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='firmware'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>efi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <loader supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>rom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pflash</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='readonly'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>yes</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='secure'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>yes</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </loader>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </os>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-passthrough' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='hostPassthroughMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='maximum' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='maximumMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-model' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <vendor>AMD</vendor>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='x2apic'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='hypervisor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='stibp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='overflow-recov'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='succor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lbrv'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-scale'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='flushbyasid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pause-filter'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pfthreshold'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rdctl-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='mds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='gds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rfds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='disable' name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='custom' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Dhyana-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-128'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-256'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-512'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v6'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v7'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <memoryBacking supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='sourceType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>file</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>anonymous</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>memfd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </memoryBacking>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <disk supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='diskDevice'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>disk</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cdrom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>floppy</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>lun</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>fdc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>sata</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <graphics supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vnc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egl-headless</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>dbus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <video supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='modelType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vga</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cirrus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>none</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>bochs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ramfb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </video>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hostdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='mode'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>subsystem</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='startupPolicy'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>mandatory</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>requisite</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>optional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='subsysType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pci</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='capsType'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='pciBackend'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hostdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <rng supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>random</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <filesystem supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='driverType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>path</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>handle</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtiofs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </filesystem>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <tpm supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-tis</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-crb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emulator</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>external</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendVersion'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>2.0</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </tpm>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <redirdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </redirdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <channel supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pty</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>unix</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </channel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <crypto supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>qemu</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </crypto>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <interface supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>passt</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <panic supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>isa</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>hyperv</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </panic>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <gic supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <vmcoreinfo supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <genid supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backingStoreInput supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backup supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <async-teardown supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <ps2 supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sev supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sgx supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hyperv supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='features'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>relaxed</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vapic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>spinlocks</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vpindex</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>runtime</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>synic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>stimer</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reset</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vendor_id</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>frequencies</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reenlightenment</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tlbflush</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ipi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>avic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emsr_bitmap</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>xmm_input</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hyperv>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <launchSecurity supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </features>
Oct 08 16:03:47 compute-0 nova_compute[117413]: </domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.735 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 08 16:03:47 compute-0 nova_compute[117413]: <domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <path>/usr/libexec/qemu-kvm</path>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <domain>kvm</domain>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <arch>x86_64</arch>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <vcpu max='240'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <iothreads supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <os supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='firmware'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <loader supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>rom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pflash</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='readonly'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>yes</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='secure'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>no</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </loader>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </os>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-passthrough' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='hostPassthroughMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='maximum' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='maximumMigratable'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>on</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>off</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='host-model' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <vendor>AMD</vendor>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='x2apic'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-deadline'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='hypervisor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc_adjust'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='spec-ctrl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='stibp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='arch-capabilities'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='cmp_legacy'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='overflow-recov'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='succor'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='amd-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='virt-ssbd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lbrv'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='tsc-scale'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='vmcb-clean'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='flushbyasid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pause-filter'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pfthreshold'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='svme-addr-chk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rdctl-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='mds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='pschange-mc-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='gds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='require' name='rfds-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <feature policy='disable' name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <mode name='custom' supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Broadwell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cascadelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Cooperlake-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Denverton-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Dhyana-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Genoa-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='auto-ibrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Milan-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amd-psfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='no-nested-data-bp'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='null-sel-clr-base'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='stibp-always-on'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-Rome-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='EPYC-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='GraniteRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-128'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-256'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx10-512'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='prefetchiti'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Haswell-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-noTSX'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v6'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Icelake-Server-v7'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='IvyBridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='KnightsMill-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4fmaps'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-4vnniw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512er'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512pf'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 08 16:03:47 compute-0 systemd[1]: Started libvirt nodedev daemon.
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G4-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Opteron_G5-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fma4'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tbm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xop'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SapphireRapids-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='amx-tile'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-bf16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-fp16'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512-vpopcntdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bitalg'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vbmi2'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrc'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fzrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='la57'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='taa-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='tsx-ldtrk'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xfd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='SierraForest-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ifma'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-ne-convert'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx-vnni-int8'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='bus-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cmpccxadd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fbsdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='fsrs'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ibrs-all'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mcdt-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pbrsb-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='psdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='sbdr-ssdp-no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='serialize'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vaes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='vpclmulqdq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Client-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='hle'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='rtm'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Skylake-Server-v5'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512bw'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512cd'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512dq'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512f'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='avx512vl'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='invpcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pcid'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='pku'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='mpx'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v2'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v3'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='core-capability'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='split-lock-detect'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='Snowridge-v4'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='cldemote'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='erms'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='gfni'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdir64b'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='movdiri'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='xsaves'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='athlon-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='core2duo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='coreduo-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='n270-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='ss'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <blockers model='phenom-v1'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnow'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <feature name='3dnowext'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </blockers>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </mode>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <memoryBacking supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <enum name='sourceType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>file</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>anonymous</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <value>memfd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </memoryBacking>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <disk supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='diskDevice'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>disk</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cdrom</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>floppy</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>lun</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ide</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>fdc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>sata</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <graphics supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vnc</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egl-headless</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>dbus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <video supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='modelType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vga</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>cirrus</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>none</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>bochs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ramfb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </video>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hostdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='mode'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>subsystem</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='startupPolicy'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>mandatory</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>requisite</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>optional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='subsysType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pci</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>scsi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='capsType'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='pciBackend'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hostdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <rng supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtio-non-transitional</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>random</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>egd</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <filesystem supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='driverType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>path</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>handle</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>virtiofs</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </filesystem>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <tpm supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-tis</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tpm-crb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emulator</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>external</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendVersion'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>2.0</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </tpm>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <redirdev supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='bus'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>usb</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </redirdev>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <channel supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>pty</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>unix</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </channel>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <crypto supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='type'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>qemu</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendModel'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>builtin</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </crypto>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <interface supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='backendType'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>default</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>passt</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <panic supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='model'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>isa</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>hyperv</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </panic>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   <features>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <gic supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <vmcoreinfo supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <genid supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backingStoreInput supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <backup supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <async-teardown supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <ps2 supported='yes'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sev supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <sgx supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <hyperv supported='yes'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       <enum name='features'>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>relaxed</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vapic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>spinlocks</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vpindex</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>runtime</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>synic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>stimer</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reset</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>vendor_id</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>frequencies</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>reenlightenment</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>tlbflush</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>ipi</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>avic</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>emsr_bitmap</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:         <value>xmm_input</value>
Oct 08 16:03:47 compute-0 nova_compute[117413]:       </enum>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     </hyperv>
Oct 08 16:03:47 compute-0 nova_compute[117413]:     <launchSecurity supported='no'/>
Oct 08 16:03:47 compute-0 nova_compute[117413]:   </features>
Oct 08 16:03:47 compute-0 nova_compute[117413]: </domainCapabilities>
Oct 08 16:03:47 compute-0 nova_compute[117413]:  _get_domain_capabilities /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1029
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.796 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1877
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.796 2 INFO nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Secure Boot support detected
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.804 2 INFO nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.804 2 INFO nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 08 16:03:47 compute-0 nova_compute[117413]: 2025-10-08 16:03:47.961 2 DEBUG nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1177
Oct 08 16:03:48 compute-0 sshd-session[117806]: Accepted publickey for zuul from 192.168.122.30 port 40074 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:03:48 compute-0 nova_compute[117413]: 2025-10-08 16:03:48.479 2 INFO nova.virt.node [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Determined node identity 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from /var/lib/nova/compute_id
Oct 08 16:03:48 compute-0 systemd-logind[847]: New session 11 of user zuul.
Oct 08 16:03:48 compute-0 systemd[1]: Started Session 11 of User zuul.
Oct 08 16:03:48 compute-0 sshd-session[117806]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:03:49 compute-0 nova_compute[117413]: 2025-10-08 16:03:49.277 2 WARNING nova.compute.manager [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Compute nodes ['9e0c638c-76e7-4854-b60d-5cdf0cf938b8'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 08 16:03:49 compute-0 python3.9[117959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 08 16:03:50 compute-0 nova_compute[117413]: 2025-10-08 16:03:50.296 2 INFO nova.compute.manager [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 08 16:03:50 compute-0 sudo[118113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-virudyqofylewoofkexzkhpqwbrjrhip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939430.2741785-52-275595076897317/AnsiballZ_systemd_service.py'
Oct 08 16:03:50 compute-0 sudo[118113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:51 compute-0 python3.9[118115]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:03:51 compute-0 systemd[1]: Reloading.
Oct 08 16:03:51 compute-0 systemd-rc-local-generator[118145]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:03:51 compute-0 systemd-sysv-generator[118149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.381 2 WARNING nova.compute.manager [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.382 2 DEBUG oslo_concurrency.lockutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.383 2 DEBUG oslo_concurrency.lockutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.383 2 DEBUG oslo_concurrency.lockutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.383 2 DEBUG nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:03:51 compute-0 sudo[118113]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.537 2 WARNING nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.539 2 DEBUG oslo_concurrency.processutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.569 2 DEBUG oslo_concurrency.processutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.570 2 DEBUG nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6530MB free_disk=73.47780227661133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.570 2 DEBUG oslo_concurrency.lockutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:03:51 compute-0 nova_compute[117413]: 2025-10-08 16:03:51.570 2 DEBUG oslo_concurrency.lockutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:03:51 compute-0 podman[118153]: 2025-10-08 16:03:51.636098043 +0000 UTC m=+0.102902946 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 08 16:03:52 compute-0 nova_compute[117413]: 2025-10-08 16:03:52.079 2 WARNING nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] No compute node record for compute-0.ctlplane.example.com:9e0c638c-76e7-4854-b60d-5cdf0cf938b8: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 could not be found.
Oct 08 16:03:52 compute-0 python3.9[118329]: ansible-ansible.builtin.service_facts Invoked
Oct 08 16:03:52 compute-0 network[118346]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 08 16:03:52 compute-0 network[118347]: 'network-scripts' will be removed from distribution in near future.
Oct 08 16:03:52 compute-0 network[118348]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 08 16:03:52 compute-0 nova_compute[117413]: 2025-10-08 16:03:52.588 2 INFO nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.133 2 DEBUG nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.134 2 DEBUG nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:03:51 up 12 min,  0 user,  load average: 1.08, 0.89, 0.49\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.626 2 INFO nova.scheduler.client.report [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] [req-46685233-b4a9-4697-a6e6-54af582b08fd] Created resource provider record via placement API for resource provider with UUID 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 and name compute-0.ctlplane.example.com.
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.657 2 DEBUG nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 08 16:03:54 compute-0 nova_compute[117413]: ] _kernel_supports_amd_sev /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1953
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.658 2 INFO nova.virt.libvirt.host [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] kernel doesn't support AMD SEV
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.658 2 DEBUG nova.compute.provider_tree [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:03:54 compute-0 nova_compute[117413]: 2025-10-08 16:03:54.659 2 DEBUG nova.virt.libvirt.driver [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.202 2 DEBUG nova.scheduler.client.report [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Updated inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:975
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.202 2 DEBUG nova.compute.provider_tree [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.203 2 DEBUG nova.compute.provider_tree [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.377 2 DEBUG nova.compute.provider_tree [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
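The inventory dicts logged above (MEMORY_MB, VCPU, DISK_GB with `reserved` and `allocation_ratio`) determine how much capacity Placement will actually schedule against. A minimal sketch, assuming Placement's usual effective-capacity formula of floor((total − reserved) × allocation_ratio):

```python
import math

# Inventory exactly as logged for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    # Schedulable units per resource class: floor((total - reserved) * ratio)
    return {rc: math.floor((v["total"] - v["reserved"]) * v["allocation_ratio"])
            for rc, v in inv.items()}

print(effective_capacity(inventory))
```

With the 4.0 CPU overcommit this host advertises 32 schedulable VCPUs from 8 physical ones, while the 0.9 disk ratio undercommits the 79 GB to 71.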
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.892 2 DEBUG nova.compute.resource_tracker [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.893 2 DEBUG oslo_concurrency.lockutils [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.323s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:03:55 compute-0 nova_compute[117413]: 2025-10-08 16:03:55.894 2 DEBUG nova.service [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.12/site-packages/nova/service.py:177
Oct 08 16:03:56 compute-0 nova_compute[117413]: 2025-10-08 16:03:56.085 2 DEBUG nova.service [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.12/site-packages/nova/service.py:194
Oct 08 16:03:56 compute-0 nova_compute[117413]: 2025-10-08 16:03:56.086 2 DEBUG nova.servicegroup.drivers.db [None req-18e72737-c89e-4042-a28c-e0e5e1b6d19d - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.12/site-packages/nova/servicegroup/drivers/db.py:44
Oct 08 16:03:58 compute-0 sudo[118623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drblkruobhhfrcqjhueuebcgjtjbncjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939437.5417812-90-281139181136846/AnsiballZ_systemd_service.py'
Oct 08 16:03:58 compute-0 sudo[118623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:58 compute-0 python3.9[118625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:03:58 compute-0 sudo[118623]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:59 compute-0 sudo[118776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzisgbdkvgrrprkdejibsdfstjlbfbsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939438.6423926-110-201507055010061/AnsiballZ_file.py'
Oct 08 16:03:59 compute-0 sudo[118776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:03:59 compute-0 python3.9[118778]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:03:59 compute-0 sudo[118776]: pam_unix(sudo:session): session closed for user root
Oct 08 16:03:59 compute-0 sudo[118928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eulfupkfbyplbtiyecnrmhunvmjpswde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939439.5346217-126-138425296557269/AnsiballZ_file.py'
Oct 08 16:03:59 compute-0 sudo[118928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:00 compute-0 python3.9[118930]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:00 compute-0 sudo[118928]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:00 compute-0 sudo[119080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkzdvvcxoaxjlbwexmoqztpaippvpfam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939440.4506588-144-45369442600275/AnsiballZ_command.py'
Oct 08 16:04:00 compute-0 sudo[119080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:01 compute-0 python3.9[119082]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
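The `_raw_params` shell snippet above is a disable-and-mask pattern: stop certmonger only if it is active, then mask it unless a local unit file already exists. A rough Python rendering of the same control flow, with the command runner injected so the logic can be exercised without systemd (the `run` signature and `unit_dir` parameter are assumptions for illustration):

```python
import os
import subprocess

def disable_and_mask(unit, run=None, unit_dir="/etc/systemd/system"):
    """Mirror the logged shell: if `unit` is active, disable it now, then
    mask it unless a unit file already exists under /etc/systemd/system."""
    if run is None:
        run = lambda *argv: subprocess.call(["systemctl", *argv])
    if run("is-active", unit) == 0:          # exit 0 means the unit is active
        run("disable", "--now", unit)        # stop and disable in one step
        if not os.path.exists(os.path.join(unit_dir, unit)):
            run("mask", unit)                # mask only when no local unit file
```

Testing with a recording fake runner confirms the same command sequence the shell would issue.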
Oct 08 16:04:01 compute-0 sudo[119080]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:02 compute-0 python3.9[119234]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 16:04:02 compute-0 podman[119310]: 2025-10-08 16:04:02.479980345 +0000 UTC m=+0.081538590 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 08 16:04:02 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:04:02 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:04:02 compute-0 sudo[119405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itryopduhikecqqfmxozuyxxjzfpwwqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939442.3021684-180-165792647022188/AnsiballZ_systemd_service.py'
Oct 08 16:04:02 compute-0 sudo[119405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:02 compute-0 python3.9[119407]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:04:03 compute-0 systemd[1]: Reloading.
Oct 08 16:04:03 compute-0 systemd-rc-local-generator[119437]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:04:03 compute-0 systemd-sysv-generator[119441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:04:03 compute-0 sudo[119405]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:03 compute-0 sudo[119593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-forcxlvwvvylyumiihqapfjajeqocfci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939443.500986-196-188147914465321/AnsiballZ_command.py'
Oct 08 16:04:03 compute-0 sudo[119593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:04 compute-0 python3.9[119595]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:04:04 compute-0 sudo[119593]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:04 compute-0 sudo[119746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmomcczfraykbrroqnxurskuztychbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939444.3618605-214-231269250762305/AnsiballZ_file.py'
Oct 08 16:04:04 compute-0 sudo[119746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:04 compute-0 python3.9[119748]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:04:04 compute-0 sudo[119746]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:05 compute-0 python3.9[119898]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:04:05 compute-0 auditd[782]: Audit daemon rotating log files
Oct 08 16:04:06 compute-0 python3.9[120050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:07 compute-0 python3.9[120171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939446.0012589-246-21454439775338/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:04:07 compute-0 sudo[120321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgcnhgyhmedfcrpfrkspjfnevjbaukxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939447.4213355-276-36667855479666/AnsiballZ_group.py'
Oct 08 16:04:07 compute-0 sudo[120321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:08 compute-0 python3.9[120323]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 08 16:04:08 compute-0 sudo[120321]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:08 compute-0 sudo[120473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdofvtkflfxadvrpryupyemppjwnvwcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939448.448828-298-63267284588674/AnsiballZ_getent.py'
Oct 08 16:04:08 compute-0 sudo[120473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:09 compute-0 python3.9[120475]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 08 16:04:09 compute-0 sudo[120473]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:09 compute-0 sudo[120626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khigsyeqzcxvzvimamnmpgjpudbtxnxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939449.3800066-314-194381252697728/AnsiballZ_group.py'
Oct 08 16:04:09 compute-0 sudo[120626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:09 compute-0 python3.9[120628]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 08 16:04:09 compute-0 groupadd[120629]: group added to /etc/group: name=ceilometer, GID=42405
Oct 08 16:04:09 compute-0 groupadd[120629]: group added to /etc/gshadow: name=ceilometer
Oct 08 16:04:09 compute-0 groupadd[120629]: new group: name=ceilometer, GID=42405
Oct 08 16:04:09 compute-0 sudo[120626]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:10 compute-0 nova_compute[117413]: 2025-10-08 16:04:10.089 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:10 compute-0 nova_compute[117413]: 2025-10-08 16:04:10.601 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:10 compute-0 sudo[120795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxwhtqpvzitqavfabzrsyvtkwwjphta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939450.1694717-330-56744697631026/AnsiballZ_user.py'
Oct 08 16:04:10 compute-0 sudo[120795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:10 compute-0 podman[120758]: 2025-10-08 16:04:10.815759197 +0000 UTC m=+0.093750942 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:04:11 compute-0 python3.9[120801]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 08 16:04:11 compute-0 useradd[120808]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Oct 08 16:04:11 compute-0 useradd[120808]: add 'ceilometer' to group 'libvirt'
Oct 08 16:04:11 compute-0 useradd[120808]: add 'ceilometer' to shadow group 'libvirt'
Oct 08 16:04:11 compute-0 sudo[120795]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:12 compute-0 python3.9[120964]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:12 compute-0 python3.9[121085]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759939451.9301877-382-256681281197154/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:13 compute-0 python3.9[121235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:14 compute-0 python3.9[121356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759939453.1038883-382-192745459759971/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:14 compute-0 python3.9[121506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:15 compute-0 python3.9[121627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1759939454.3582513-382-53153063077079/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
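The `checksum=` values in the copy/stat entries above are plain SHA-1 hex digests of the file contents (note `checksum_algorithm=sha1` in the stat calls), which is how Ansible decides whether a copy task changed anything. A minimal sketch:

```python
import hashlib

def ansible_style_checksum(data: bytes) -> str:
    # The checksum= field in the copy/stat log lines is the SHA-1 hex
    # digest of the file contents; matching digests mean "no change".
    return hashlib.sha1(data).hexdigest()

print(ansible_style_checksum(b"hello"))
# aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```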
Oct 08 16:04:15 compute-0 podman[121751]: 2025-10-08 16:04:15.967203561 +0000 UTC m=+0.100352652 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 08 16:04:16 compute-0 python3.9[121787]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:04:16 compute-0 python3.9[121945]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:04:17 compute-0 python3.9[122097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:18 compute-0 python3.9[122218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939457.2717783-500-173660195153232/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
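The `mode=420` in this and the following tasks is not an error: 420 is the decimal form of octal 0644. This is likely the well-known YAML pitfall where an unquoted `mode: 0644` in the playbook is parsed as an octal integer, so the module receives the decimal value:

```python
# 420 decimal == 0644 octal; an unquoted 0644 in YAML arrives as int 420.
assert int("644", 8) == 420
assert oct(420) == "0o644"
print(f"{420:o}")  # prints 644
```

Quoting the mode ("0644") or writing 0o644 in the playbook avoids the ambiguity; the resulting permissions here are the same either way.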
Oct 08 16:04:19 compute-0 python3.9[122368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:19 compute-0 python3.9[122444]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:20 compute-0 python3.9[122594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:21 compute-0 python3.9[122715]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939459.8461711-500-77443140446196/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=dc18ad125a85ddea0c4017155c3d63e40805291b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:21 compute-0 python3.9[122865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:22 compute-0 podman[122960]: 2025-10-08 16:04:22.137768631 +0000 UTC m=+0.090865138 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:04:22 compute-0 python3.9[122996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939461.2312264-500-174368741079523/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:22 compute-0 python3.9[123162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:23 compute-0 python3.9[123283]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939462.4851468-500-206363791486535/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:24 compute-0 python3.9[123433]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:24 compute-0 python3.9[123554]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939463.7015193-500-279207332365516/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:25 compute-0 python3.9[123704]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:25 compute-0 python3.9[123825]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939464.8900914-500-276308550061354/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:26 compute-0 python3.9[123975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:27 compute-0 python3.9[124096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939466.1670022-500-167614469436270/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:28 compute-0 python3.9[124246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:28 compute-0 python3.9[124367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939467.5625381-500-67630715396179/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:29 compute-0 python3.9[124517]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:30 compute-0 python3.9[124638]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939468.809377-500-213537792744330/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:30 compute-0 python3.9[124788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:31 compute-0 python3.9[124909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939470.2130542-500-191953904642407/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:32 compute-0 python3.9[125059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:32 compute-0 podman[125109]: 2025-10-08 16:04:32.629033365 +0000 UTC m=+0.070898514 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:04:32 compute-0 python3.9[125144]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:33 compute-0 python3.9[125305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:34 compute-0 python3.9[125381]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:34 compute-0 python3.9[125531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:35 compute-0 python3.9[125608]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:36 compute-0 sudo[125758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvfmqxhelqojywazcfagjfavpaveibok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939475.7199423-878-239556495051763/AnsiballZ_file.py'
Oct 08 16:04:36 compute-0 sudo[125758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:36 compute-0 python3.9[125760]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:36 compute-0 sudo[125758]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:36 compute-0 sudo[125910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neohmexkeznlmnuwwjfsvzvtkjwwkgvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939476.5248728-894-58355764113205/AnsiballZ_file.py'
Oct 08 16:04:36 compute-0 sudo[125910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:37 compute-0 python3.9[125912]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:37 compute-0 sudo[125910]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:37 compute-0 sudo[126062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psduhhjyrzdoqrrzeukjkgorloazgczh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939477.3529572-910-3766837116412/AnsiballZ_file.py'
Oct 08 16:04:37 compute-0 sudo[126062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:37 compute-0 python3.9[126064]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:04:38 compute-0 sudo[126062]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:38 compute-0 sudo[126214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhltlmpdxcdvgmtgjoesxbfjdzimqxax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939478.1749218-926-255789252715871/AnsiballZ_systemd_service.py'
Oct 08 16:04:38 compute-0 sudo[126214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:38 compute-0 python3.9[126216]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:04:39 compute-0 systemd[1]: Reloading.
Oct 08 16:04:39 compute-0 systemd-rc-local-generator[126241]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:04:39 compute-0 systemd-sysv-generator[126245]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:04:39 compute-0 systemd[1]: Listening on Podman API Socket.
Oct 08 16:04:39 compute-0 sudo[126214]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:40 compute-0 sudo[126406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbkqwodmiooiexiynjtbpmzeefzjabov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939479.7239223-944-84166667299106/AnsiballZ_stat.py'
Oct 08 16:04:40 compute-0 sudo[126406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:40 compute-0 python3.9[126408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:40 compute-0 sudo[126406]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:40 compute-0 sudo[126529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzfwespidxtbzfqvfyzczrkoacjujvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939479.7239223-944-84166667299106/AnsiballZ_copy.py'
Oct 08 16:04:40 compute-0 sudo[126529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:40 compute-0 python3.9[126531]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939479.7239223-944-84166667299106/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:04:40 compute-0 sudo[126529]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:41 compute-0 podman[126532]: 2025-10-08 16:04:41.079521321 +0000 UTC m=+0.090941750 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, container_name=iscsid, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:04:41 compute-0 sudo[126698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhfjeqwbpgdpzshzclkfbglelxdfeoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939481.363228-978-281465672551151/AnsiballZ_container_config_data.py'
Oct 08 16:04:41 compute-0 sudo[126698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:04:41.861 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:04:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:04:41.862 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:04:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:04:41.862 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:04:42 compute-0 python3.9[126700]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 08 16:04:42 compute-0 sudo[126698]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:42 compute-0 sudo[126851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwsweyextvptrhtxbujytrxkwnzggiat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939482.336915-996-11607831487466/AnsiballZ_container_config_hash.py'
Oct 08 16:04:42 compute-0 sudo[126851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:43 compute-0 python3.9[126853]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 16:04:43 compute-0 sudo[126851]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.364 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.366 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.366 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.366 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.366 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:04:43 compute-0 nova_compute[117413]: 2025-10-08 16:04:43.884 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:04:44 compute-0 nova_compute[117413]: 2025-10-08 16:04:44.086 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:04:44 compute-0 nova_compute[117413]: 2025-10-08 16:04:44.087 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:04:44 compute-0 sudo[127003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fykmkyumnkgwbbniotnatdirsjevbraq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939483.4960072-1016-272914792035518/AnsiballZ_edpm_container_manage.py'
Oct 08 16:04:44 compute-0 sudo[127003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:44 compute-0 nova_compute[117413]: 2025-10-08 16:04:44.121 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:04:44 compute-0 nova_compute[117413]: 2025-10-08 16:04:44.122 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6523MB free_disk=73.47159194946289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:04:44 compute-0 nova_compute[117413]: 2025-10-08 16:04:44.123 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:04:44 compute-0 nova_compute[117413]: 2025-10-08 16:04:44.124 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:04:44 compute-0 python3[127006]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 16:04:45 compute-0 nova_compute[117413]: 2025-10-08 16:04:45.182 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:04:45 compute-0 nova_compute[117413]: 2025-10-08 16:04:45.184 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:04:44 up 12 min,  0 user,  load average: 0.75, 0.83, 0.49\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:04:45 compute-0 nova_compute[117413]: 2025-10-08 16:04:45.204 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:04:45 compute-0 nova_compute[117413]: 2025-10-08 16:04:45.712 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:04:46 compute-0 podman[127018]: 2025-10-08 16:04:46.073122028 +0000 UTC m=+1.655346315 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 08 16:04:46 compute-0 nova_compute[117413]: 2025-10-08 16:04:46.223 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:04:46 compute-0 nova_compute[117413]: 2025-10-08 16:04:46.223 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.099s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:04:46 compute-0 podman[127114]: 2025-10-08 16:04:46.231182551 +0000 UTC m=+0.053441150 container create 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:04:46 compute-0 podman[127114]: 2025-10-08 16:04:46.203090742 +0000 UTC m=+0.025349351 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 08 16:04:46 compute-0 python3[127006]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Oct 08 16:04:46 compute-0 sudo[127003]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:46 compute-0 podman[127153]: 2025-10-08 16:04:46.443902148 +0000 UTC m=+0.051432073 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Oct 08 16:04:46 compute-0 sudo[127320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstdlythpqiznuesehwmaggqejbferqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939486.5259154-1032-167256626277163/AnsiballZ_stat.py'
Oct 08 16:04:46 compute-0 sudo[127320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:47 compute-0 python3.9[127322]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:04:47 compute-0 sudo[127320]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:47 compute-0 sudo[127474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbuaqnrcyqnyrlwrcslwoqefepnkhgiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939487.3011909-1050-95994682304871/AnsiballZ_file.py'
Oct 08 16:04:47 compute-0 sudo[127474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:47 compute-0 python3.9[127476]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:47 compute-0 sudo[127474]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:48 compute-0 sudo[127625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kebgoluoxyrilvxfybignzxjjsyduncb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939487.8744493-1050-37336462204698/AnsiballZ_copy.py'
Oct 08 16:04:48 compute-0 sudo[127625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:48 compute-0 python3.9[127627]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759939487.8744493-1050-37336462204698/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:04:48 compute-0 sudo[127625]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:48 compute-0 sudo[127701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcryubmpptfgyjuyajccmaioohhqeykr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939487.8744493-1050-37336462204698/AnsiballZ_systemd.py'
Oct 08 16:04:48 compute-0 sudo[127701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:49 compute-0 python3.9[127703]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:04:49 compute-0 systemd[1]: Reloading.
Oct 08 16:04:49 compute-0 systemd-sysv-generator[127735]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:04:49 compute-0 systemd-rc-local-generator[127731]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:04:49 compute-0 sudo[127701]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:49 compute-0 sudo[127812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqenhkjlvvvyughhnmdkqqbszpecccwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939487.8744493-1050-37336462204698/AnsiballZ_systemd.py'
Oct 08 16:04:49 compute-0 sudo[127812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:50 compute-0 python3.9[127814]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:04:50 compute-0 systemd[1]: Reloading.
Oct 08 16:04:50 compute-0 systemd-rc-local-generator[127842]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:04:50 compute-0 systemd-sysv-generator[127846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:04:50 compute-0 systemd[1]: Starting podman_exporter container...
Oct 08 16:04:50 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca714f5d26ad4f9dc8fbd66c35a1434227e7d668780aca4664afebc4aab0e6f/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 16:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca714f5d26ad4f9dc8fbd66c35a1434227e7d668780aca4664afebc4aab0e6f/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 08 16:04:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.
Oct 08 16:04:50 compute-0 podman[127854]: 2025-10-08 16:04:50.709126633 +0000 UTC m=+0.154537403 container init 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.724Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.725Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.725Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.725Z caller=handler.go:105 level=info collector=container
Oct 08 16:04:50 compute-0 podman[127854]: 2025-10-08 16:04:50.733436663 +0000 UTC m=+0.178847423 container start 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:04:50 compute-0 podman[127854]: podman_exporter
Oct 08 16:04:50 compute-0 systemd[1]: Starting Podman API Service...
Oct 08 16:04:50 compute-0 systemd[1]: Started podman_exporter container.
Oct 08 16:04:50 compute-0 systemd[1]: Started Podman API Service.
Oct 08 16:04:50 compute-0 sudo[127812]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="Setting parallel job count to 25"
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="Using sqlite as database backend"
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 08 16:04:50 compute-0 podman[127881]: @ - - [08/Oct/2025:16:04:50 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 08 16:04:50 compute-0 podman[127881]: time="2025-10-08T16:04:50Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:04:50 compute-0 podman[127880]: 2025-10-08 16:04:50.813914062 +0000 UTC m=+0.066518897 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:04:50 compute-0 systemd[1]: 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913-1427d6439eebdc74.service: Main process exited, code=exited, status=1/FAILURE
Oct 08 16:04:50 compute-0 systemd[1]: 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913-1427d6439eebdc74.service: Failed with result 'exit-code'.
Oct 08 16:04:50 compute-0 podman[127881]: @ - - [08/Oct/2025:16:04:50 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 16547 "" "Go-http-client/1.1"
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.829Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.830Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 08 16:04:50 compute-0 podman_exporter[127870]: ts=2025-10-08T16:04:50.831Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 08 16:04:51 compute-0 sudo[128069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkotrlvxgkyvcsiykhszmtlgtfswecjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939490.977548-1098-53938568278829/AnsiballZ_systemd.py'
Oct 08 16:04:51 compute-0 sudo[128069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:51 compute-0 python3.9[128071]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:04:51 compute-0 systemd[1]: Stopping podman_exporter container...
Oct 08 16:04:51 compute-0 podman[127881]: @ - - [08/Oct/2025:16:04:50 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Oct 08 16:04:51 compute-0 systemd[1]: libpod-53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.scope: Deactivated successfully.
Oct 08 16:04:51 compute-0 podman[128075]: 2025-10-08 16:04:51.709772298 +0000 UTC m=+0.044984447 container died 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:04:51 compute-0 systemd[1]: 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913-1427d6439eebdc74.timer: Deactivated successfully.
Oct 08 16:04:51 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.
Oct 08 16:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-aca714f5d26ad4f9dc8fbd66c35a1434227e7d668780aca4664afebc4aab0e6f-merged.mount: Deactivated successfully.
Oct 08 16:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913-userdata-shm.mount: Deactivated successfully.
Oct 08 16:04:51 compute-0 podman[128075]: 2025-10-08 16:04:51.894618293 +0000 UTC m=+0.229830452 container cleanup 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:04:51 compute-0 podman[128075]: podman_exporter
Oct 08 16:04:51 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 08 16:04:51 compute-0 podman[128101]: podman_exporter
Oct 08 16:04:51 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 08 16:04:51 compute-0 systemd[1]: Stopped podman_exporter container.
Oct 08 16:04:51 compute-0 systemd[1]: Starting podman_exporter container...
Oct 08 16:04:52 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca714f5d26ad4f9dc8fbd66c35a1434227e7d668780aca4664afebc4aab0e6f/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 16:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aca714f5d26ad4f9dc8fbd66c35a1434227e7d668780aca4664afebc4aab0e6f/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 08 16:04:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.
Oct 08 16:04:52 compute-0 podman[128115]: 2025-10-08 16:04:52.098485846 +0000 UTC m=+0.110032791 container init 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.112Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.112Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.112Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.112Z caller=handler.go:105 level=info collector=container
Oct 08 16:04:52 compute-0 podman[127881]: @ - - [08/Oct/2025:16:04:52 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 08 16:04:52 compute-0 podman[127881]: time="2025-10-08T16:04:52Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:04:52 compute-0 podman[128115]: 2025-10-08 16:04:52.1267456 +0000 UTC m=+0.138292515 container start 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:04:52 compute-0 podman[128115]: podman_exporter
Oct 08 16:04:52 compute-0 podman[127881]: @ - - [08/Oct/2025:16:04:52 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 16549 "" "Go-http-client/1.1"
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.134Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.134Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 08 16:04:52 compute-0 podman_exporter[128130]: ts=2025-10-08T16:04:52.134Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 08 16:04:52 compute-0 systemd[1]: Started podman_exporter container.
Oct 08 16:04:52 compute-0 sudo[128069]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:52 compute-0 podman[128140]: 2025-10-08 16:04:52.227727739 +0000 UTC m=+0.090275542 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:04:52 compute-0 podman[128168]: 2025-10-08 16:04:52.299043963 +0000 UTC m=+0.076633829 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 08 16:04:52 compute-0 sudo[128338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnmhevcixmwyjnsnwqpojajkjxwyucdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939492.3794448-1114-108467008618677/AnsiballZ_stat.py'
Oct 08 16:04:52 compute-0 sudo[128338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:52 compute-0 python3.9[128340]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:04:52 compute-0 sudo[128338]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:53 compute-0 sudo[128461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taytlyrvjmncrdwsslmlyqzhnribvffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939492.3794448-1114-108467008618677/AnsiballZ_copy.py'
Oct 08 16:04:53 compute-0 sudo[128461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:53 compute-0 python3.9[128463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759939492.3794448-1114-108467008618677/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 08 16:04:53 compute-0 sudo[128461]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:54 compute-0 sudo[128613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asjuxtsavhwcmolwgksnenwgtqmsoaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939493.7894773-1148-268427241651388/AnsiballZ_container_config_data.py'
Oct 08 16:04:54 compute-0 sudo[128613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:54 compute-0 python3.9[128615]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 08 16:04:54 compute-0 sudo[128613]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:54 compute-0 sudo[128765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltcjavtcjydkuxqdzpafzcqviagsfhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939494.5795653-1166-150515535718307/AnsiballZ_container_config_hash.py'
Oct 08 16:04:54 compute-0 sudo[128765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:55 compute-0 python3.9[128767]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 08 16:04:55 compute-0 sudo[128765]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:55 compute-0 sudo[128917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizfcmmckgacxafxzxzpdvnuuzfqhaas ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939495.4054682-1186-83722164464873/AnsiballZ_edpm_container_manage.py'
Oct 08 16:04:55 compute-0 sudo[128917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:04:55 compute-0 python3[128919]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 08 16:04:58 compute-0 podman[128932]: 2025-10-08 16:04:58.866234519 +0000 UTC m=+2.785029807 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 08 16:04:59 compute-0 podman[129031]: 2025-10-08 16:04:59.031612563 +0000 UTC m=+0.061215634 container create 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, version=9.6, vcs-type=git, architecture=x86_64)
Oct 08 16:04:59 compute-0 podman[129031]: 2025-10-08 16:04:58.996695837 +0000 UTC m=+0.026298918 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 08 16:04:59 compute-0 python3[128919]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 08 16:04:59 compute-0 sudo[128917]: pam_unix(sudo:session): session closed for user root
Oct 08 16:04:59 compute-0 sudo[129221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgtvwhkuvihiusrxlsmtlgjfnzbgtzti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939499.42817-1202-113718736233139/AnsiballZ_stat.py'
Oct 08 16:04:59 compute-0 sudo[129221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:00 compute-0 python3.9[129223]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:05:00 compute-0 sudo[129221]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:00 compute-0 sudo[129375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrovyisclbxthgrczdbjycdvrsklepky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939500.353795-1220-163604487108230/AnsiballZ_file.py'
Oct 08 16:05:00 compute-0 sudo[129375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:00 compute-0 python3.9[129377]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:00 compute-0 sudo[129375]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:01 compute-0 sudo[129526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkosjazkueakohfbfvapzmctdpvfxbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939501.0430884-1220-257342623004135/AnsiballZ_copy.py'
Oct 08 16:05:01 compute-0 sudo[129526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:01 compute-0 python3.9[129528]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759939501.0430884-1220-257342623004135/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:01 compute-0 sudo[129526]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:02 compute-0 sudo[129602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukpgfzhtbqexvigfvevxnacxomflpeid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939501.0430884-1220-257342623004135/AnsiballZ_systemd.py'
Oct 08 16:05:02 compute-0 sudo[129602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:02 compute-0 python3.9[129604]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 08 16:05:02 compute-0 systemd[1]: Reloading.
Oct 08 16:05:02 compute-0 systemd-rc-local-generator[129631]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:05:02 compute-0 systemd-sysv-generator[129636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:05:02 compute-0 sudo[129602]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:02 compute-0 podman[129640]: 2025-10-08 16:05:02.839407831 +0000 UTC m=+0.101931864 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:05:02 compute-0 sudo[129734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafcrgednopyhhpqxuttlwbevrulnspw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939501.0430884-1220-257342623004135/AnsiballZ_systemd.py'
Oct 08 16:05:03 compute-0 sudo[129734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:03 compute-0 python3.9[129736]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:05:03 compute-0 systemd[1]: Reloading.
Oct 08 16:05:03 compute-0 systemd-rc-local-generator[129764]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:05:03 compute-0 systemd-sysv-generator[129768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:05:03 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 08 16:05:03 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 08 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 16:05:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 08 16:05:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.
Oct 08 16:05:03 compute-0 podman[129775]: 2025-10-08 16:05:03.904570158 +0000 UTC m=+0.168435987 container init 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *bridge.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *coverage.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *datapath.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *iface.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *memory.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *ovnnorthd.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *ovn.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *ovsdbserver.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *pmd_perf.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *pmd_rxq.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: INFO    16:05:03 main.go:48: registering *vswitch.Collector
Oct 08 16:05:03 compute-0 openstack_network_exporter[129791]: NOTICE  16:05:03 main.go:76: listening on https://:9105/metrics
Oct 08 16:05:03 compute-0 podman[129775]: 2025-10-08 16:05:03.93560459 +0000 UTC m=+0.199470329 container start 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible)
Oct 08 16:05:03 compute-0 podman[129775]: openstack_network_exporter
Oct 08 16:05:03 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 08 16:05:03 compute-0 sudo[129734]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:04 compute-0 podman[129801]: 2025-10-08 16:05:04.023896562 +0000 UTC m=+0.076924880 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9)
Oct 08 16:05:04 compute-0 sudo[129975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbkdisvlifawjoywcntsjnfkerjtwra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939504.1659472-1268-159140216558832/AnsiballZ_systemd.py'
Oct 08 16:05:04 compute-0 sudo[129975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:04 compute-0 python3.9[129977]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:05:04 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Oct 08 16:05:04 compute-0 systemd[1]: libpod-3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.scope: Deactivated successfully.
Oct 08 16:05:04 compute-0 podman[129981]: 2025-10-08 16:05:04.925042352 +0000 UTC m=+0.034476973 container stop 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 08 16:05:04 compute-0 podman[129981]: 2025-10-08 16:05:04.955138076 +0000 UTC m=+0.064572717 container died 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Oct 08 16:05:04 compute-0 systemd[1]: 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86-f754b03b786a23d.timer: Deactivated successfully.
Oct 08 16:05:04 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.
Oct 08 16:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86-userdata-shm.mount: Deactivated successfully.
Oct 08 16:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619-merged.mount: Deactivated successfully.
Oct 08 16:05:05 compute-0 podman[129981]: 2025-10-08 16:05:05.726523907 +0000 UTC m=+0.835958548 container cleanup 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41)
Oct 08 16:05:05 compute-0 podman[129981]: openstack_network_exporter
Oct 08 16:05:05 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 08 16:05:05 compute-0 podman[130010]: openstack_network_exporter
Oct 08 16:05:05 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 08 16:05:05 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Oct 08 16:05:05 compute-0 systemd[1]: Starting openstack_network_exporter container...
Oct 08 16:05:05 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 08 16:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 08 16:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/072008012e5902c265129297eb5f74d693aaceca607cc8974091b05886d05619/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 08 16:05:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.
Oct 08 16:05:05 compute-0 podman[130023]: 2025-10-08 16:05:05.976233999 +0000 UTC m=+0.141915758 container init 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *bridge.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *coverage.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *datapath.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *iface.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *memory.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *ovnnorthd.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *ovn.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *ovsdbserver.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *pmd_perf.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *pmd_rxq.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: INFO    16:05:05 main.go:48: registering *vswitch.Collector
Oct 08 16:05:05 compute-0 openstack_network_exporter[130039]: NOTICE  16:05:05 main.go:76: listening on https://:9105/metrics
Oct 08 16:05:06 compute-0 podman[130023]: 2025-10-08 16:05:06.004436837 +0000 UTC m=+0.170118546 container start 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 16:05:06 compute-0 podman[130023]: openstack_network_exporter
Oct 08 16:05:06 compute-0 systemd[1]: Started openstack_network_exporter container.
Oct 08 16:05:06 compute-0 sudo[129975]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:06 compute-0 podman[130049]: 2025-10-08 16:05:06.095008887 +0000 UTC m=+0.071170951 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 08 16:05:06 compute-0 sudo[130220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfixmfhloousmpxhverckywmsuwziknq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939506.2414925-1284-258866497325102/AnsiballZ_find.py'
Oct 08 16:05:06 compute-0 sudo[130220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:06 compute-0 python3.9[130222]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 08 16:05:06 compute-0 sudo[130220]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:07 compute-0 sudo[130372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypwmwytejaehubouqopuwkrflldhaaqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939507.22203-1303-10591574494440/AnsiballZ_podman_container_info.py'
Oct 08 16:05:07 compute-0 sudo[130372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:07 compute-0 python3.9[130374]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 08 16:05:08 compute-0 sudo[130372]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:08 compute-0 sudo[130537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwkrspxrhhvuideftprsmpomkgvofxhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939508.306278-1311-232769085382992/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:08 compute-0 sudo[130537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:09 compute-0 python3.9[130539]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:09 compute-0 systemd[1]: Started libpod-conmon-de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4.scope.
Oct 08 16:05:09 compute-0 podman[130540]: 2025-10-08 16:05:09.24052003 +0000 UTC m=+0.122235510 container exec de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller)
Oct 08 16:05:09 compute-0 podman[130540]: 2025-10-08 16:05:09.248188546 +0000 UTC m=+0.129904086 container exec_died de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:05:09 compute-0 sudo[130537]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:09 compute-0 systemd[1]: libpod-conmon-de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4.scope: Deactivated successfully.
Oct 08 16:05:09 compute-0 sudo[130721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osscntxozlytzmbckvuwxutjjzuiwurm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939509.5017056-1319-74171247757271/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:09 compute-0 sudo[130721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:10 compute-0 python3.9[130723]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:10 compute-0 systemd[1]: Started libpod-conmon-de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4.scope.
Oct 08 16:05:10 compute-0 podman[130724]: 2025-10-08 16:05:10.216831929 +0000 UTC m=+0.093785985 container exec de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:05:10 compute-0 podman[130724]: 2025-10-08 16:05:10.225005739 +0000 UTC m=+0.101959785 container exec_died de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:05:10 compute-0 sudo[130721]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:10 compute-0 systemd[1]: libpod-conmon-de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4.scope: Deactivated successfully.
Oct 08 16:05:10 compute-0 sudo[130905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhsvmldnzumpfhfqowyowkncmtglalur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939510.4912682-1327-78360922956195/AnsiballZ_file.py'
Oct 08 16:05:10 compute-0 sudo[130905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:11 compute-0 python3.9[130907]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:11 compute-0 sudo[130905]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:11 compute-0 podman[130968]: 2025-10-08 16:05:11.505991392 +0000 UTC m=+0.103701226 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:05:11 compute-0 sudo[131077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-welijvzjgmvazgirkyllvzqcqigfyfpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939511.3267844-1336-1095940437218/AnsiballZ_podman_container_info.py'
Oct 08 16:05:11 compute-0 sudo[131077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:11 compute-0 python3.9[131079]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 08 16:05:11 compute-0 sudo[131077]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:12 compute-0 sudo[131242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxnsaplpdvkgrascirffkhvicoeyrdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939512.1648703-1344-258263792152592/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:12 compute-0 sudo[131242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:12 compute-0 python3.9[131244]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:12 compute-0 systemd[1]: Started libpod-conmon-c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a.scope.
Oct 08 16:05:12 compute-0 podman[131245]: 2025-10-08 16:05:12.85620429 +0000 UTC m=+0.099954856 container exec c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:05:12 compute-0 podman[131245]: 2025-10-08 16:05:12.894370891 +0000 UTC m=+0.138121367 container exec_died c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:05:12 compute-0 systemd[1]: libpod-conmon-c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a.scope: Deactivated successfully.
Oct 08 16:05:12 compute-0 sudo[131242]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:13 compute-0 sudo[131425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tialsxzbtfbyrssmjvorhtcgcdsuizyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939513.1695023-1352-223566004473397/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:13 compute-0 sudo[131425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:13 compute-0 python3.9[131427]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:13 compute-0 systemd[1]: Started libpod-conmon-c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a.scope.
Oct 08 16:05:13 compute-0 podman[131428]: 2025-10-08 16:05:13.912086325 +0000 UTC m=+0.112419852 container exec c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:05:13 compute-0 podman[131428]: 2025-10-08 16:05:13.948915406 +0000 UTC m=+0.149249033 container exec_died c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 08 16:05:13 compute-0 systemd[1]: libpod-conmon-c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a.scope: Deactivated successfully.
Oct 08 16:05:14 compute-0 sudo[131425]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:14 compute-0 sudo[131608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifjorjhqqvibxhgxxrlmkpgzpiworosx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939514.2193904-1360-248009135438362/AnsiballZ_file.py'
Oct 08 16:05:14 compute-0 sudo[131608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:14 compute-0 python3.9[131610]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:14 compute-0 sudo[131608]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:15 compute-0 sudo[131760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbutubocytilazkihbsftsteeyeniopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939515.0536847-1369-125343775249037/AnsiballZ_podman_container_info.py'
Oct 08 16:05:15 compute-0 sudo[131760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:15 compute-0 python3.9[131762]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 08 16:05:15 compute-0 sudo[131760]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:16 compute-0 sudo[131924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadpnsutvxtftysyeqvgpqwlmaerzxgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939515.9586132-1377-23597264465038/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:16 compute-0 sudo[131924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:16 compute-0 python3.9[131926]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:16 compute-0 systemd[1]: Started libpod-conmon-5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0.scope.
Oct 08 16:05:16 compute-0 podman[131927]: 2025-10-08 16:05:16.613209009 +0000 UTC m=+0.098111232 container exec 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:05:16 compute-0 podman[131927]: 2025-10-08 16:05:16.647550498 +0000 UTC m=+0.132452741 container exec_died 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251007)
Oct 08 16:05:16 compute-0 systemd[1]: libpod-conmon-5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0.scope: Deactivated successfully.
Oct 08 16:05:16 compute-0 sudo[131924]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:16 compute-0 podman[131944]: 2025-10-08 16:05:16.727070213 +0000 UTC m=+0.095484105 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 08 16:05:17 compute-0 sudo[132124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwfnecqbcgyteigizzglunlfeqvkozvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939516.8893678-1385-119083015344948/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:17 compute-0 sudo[132124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:17 compute-0 python3.9[132126]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:17 compute-0 systemd[1]: Started libpod-conmon-5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0.scope.
Oct 08 16:05:17 compute-0 podman[132127]: 2025-10-08 16:05:17.584566932 +0000 UTC m=+0.100660087 container exec 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:05:17 compute-0 podman[132127]: 2025-10-08 16:05:17.621410004 +0000 UTC m=+0.137503339 container exec_died 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 08 16:05:17 compute-0 systemd[1]: libpod-conmon-5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0.scope: Deactivated successfully.
Oct 08 16:05:17 compute-0 sudo[132124]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:18 compute-0 sudo[132309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxgnvoqqtbydlazvoeihofpmkjxcfab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939517.854389-1393-154157830203371/AnsiballZ_file.py'
Oct 08 16:05:18 compute-0 sudo[132309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:18 compute-0 python3.9[132311]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:18 compute-0 sudo[132309]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:19 compute-0 sudo[132461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vflehdnnjocwtsozwtcqavsbxwoyayxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939518.745346-1402-55047791670203/AnsiballZ_podman_container_info.py'
Oct 08 16:05:19 compute-0 sudo[132461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:19 compute-0 python3.9[132463]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 08 16:05:19 compute-0 sudo[132461]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:19 compute-0 sudo[132626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqhbwetbagunnffnikqirktymqcnnon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939519.639064-1410-237189577544310/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:19 compute-0 sudo[132626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:20 compute-0 python3.9[132628]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:20 compute-0 systemd[1]: Started libpod-conmon-02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.scope.
Oct 08 16:05:20 compute-0 podman[132629]: 2025-10-08 16:05:20.335143889 +0000 UTC m=+0.107922070 container exec 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 08 16:05:20 compute-0 podman[132629]: 2025-10-08 16:05:20.366058996 +0000 UTC m=+0.138837147 container exec_died 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:05:20 compute-0 systemd[1]: libpod-conmon-02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.scope: Deactivated successfully.
Oct 08 16:05:20 compute-0 sudo[132626]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:20 compute-0 sudo[132809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bntrihfaklpkgfuzduvmiboxyoxsuxid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939520.6066875-1418-213482306146075/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:20 compute-0 sudo[132809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:21 compute-0 python3.9[132811]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:21 compute-0 systemd[1]: Started libpod-conmon-02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.scope.
Oct 08 16:05:21 compute-0 podman[132812]: 2025-10-08 16:05:21.329370393 +0000 UTC m=+0.099091671 container exec 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 16:05:21 compute-0 podman[132812]: 2025-10-08 16:05:21.365416892 +0000 UTC m=+0.135138140 container exec_died 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 08 16:05:21 compute-0 systemd[1]: libpod-conmon-02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214.scope: Deactivated successfully.
Oct 08 16:05:21 compute-0 sudo[132809]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:21 compute-0 sudo[132995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddgimvxvzzzyyenzlohuysupnrqiiekb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939521.5672214-1426-189924054722883/AnsiballZ_file.py'
Oct 08 16:05:21 compute-0 sudo[132995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:22 compute-0 python3.9[132997]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:22 compute-0 sudo[132995]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:22 compute-0 podman[133051]: 2025-10-08 16:05:22.506061524 +0000 UTC m=+0.094353741 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:05:22 compute-0 podman[133059]: 2025-10-08 16:05:22.53113332 +0000 UTC m=+0.119444888 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:05:22 compute-0 sudo[133196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbxhpplwctjrqsuojlsugvmnbkfmgqcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939522.3420615-1435-143613804737613/AnsiballZ_podman_container_info.py'
Oct 08 16:05:22 compute-0 sudo[133196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:22 compute-0 python3.9[133198]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 08 16:05:22 compute-0 sudo[133196]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:23 compute-0 sudo[133361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrievmyajbjvpyufwgigbabeadiqgmmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939523.1666808-1443-2495029673926/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:23 compute-0 sudo[133361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:23 compute-0 python3.9[133363]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:23 compute-0 systemd[1]: Started libpod-conmon-53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.scope.
Oct 08 16:05:23 compute-0 podman[133364]: 2025-10-08 16:05:23.880785612 +0000 UTC m=+0.106821458 container exec 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:05:23 compute-0 podman[133364]: 2025-10-08 16:05:23.916467759 +0000 UTC m=+0.142503555 container exec_died 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:05:23 compute-0 systemd[1]: libpod-conmon-53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.scope: Deactivated successfully.
Oct 08 16:05:23 compute-0 sudo[133361]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:24 compute-0 sudo[133545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewnxsihtzflynisekstnkmppqtmlmmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939524.1722245-1451-20568816229653/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:24 compute-0 sudo[133545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:24 compute-0 python3.9[133547]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:24 compute-0 systemd[1]: Started libpod-conmon-53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.scope.
Oct 08 16:05:24 compute-0 podman[133548]: 2025-10-08 16:05:24.844141769 +0000 UTC m=+0.076567269 container exec 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:05:24 compute-0 podman[133548]: 2025-10-08 16:05:24.880488617 +0000 UTC m=+0.112914097 container exec_died 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:05:24 compute-0 systemd[1]: libpod-conmon-53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913.scope: Deactivated successfully.
Oct 08 16:05:24 compute-0 sudo[133545]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:25 compute-0 sudo[133729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lghqgpedhbkfoxqkstupgjtwxszvbxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939525.139085-1459-185145728622718/AnsiballZ_file.py'
Oct 08 16:05:25 compute-0 sudo[133729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:25 compute-0 python3.9[133731]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:25 compute-0 sudo[133729]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:26 compute-0 sudo[133881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lycdvoixbdxyowbvqhdbgsiofldfuflm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939525.93276-1468-20409161402190/AnsiballZ_podman_container_info.py'
Oct 08 16:05:26 compute-0 sudo[133881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:26 compute-0 python3.9[133883]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 08 16:05:26 compute-0 sudo[133881]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:27 compute-0 sudo[134047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jupdjxnfukkdaaukyqgzriklgdoecund ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939526.8724046-1476-213488004399526/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:27 compute-0 sudo[134047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:27 compute-0 python3.9[134049]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:27 compute-0 systemd[1]: Started libpod-conmon-3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.scope.
Oct 08 16:05:27 compute-0 podman[134050]: 2025-10-08 16:05:27.54174691 +0000 UTC m=+0.069627565 container exec 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 08 16:05:27 compute-0 podman[134050]: 2025-10-08 16:05:27.574272806 +0000 UTC m=+0.102153461 container exec_died 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 08 16:05:27 compute-0 systemd[1]: libpod-conmon-3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.scope: Deactivated successfully.
Oct 08 16:05:27 compute-0 sudo[134047]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:28 compute-0 sudo[134232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytaddmorjraaqcveayycdsdvpkuzboyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939527.7867556-1484-81791287813028/AnsiballZ_podman_container_exec.py'
Oct 08 16:05:28 compute-0 sudo[134232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:28 compute-0 python3.9[134234]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 08 16:05:28 compute-0 systemd[1]: Started libpod-conmon-3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.scope.
Oct 08 16:05:28 compute-0 podman[134235]: 2025-10-08 16:05:28.426340806 +0000 UTC m=+0.099552155 container exec 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct 08 16:05:28 compute-0 podman[134235]: 2025-10-08 16:05:28.462397334 +0000 UTC m=+0.135608673 container exec_died 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 16:05:28 compute-0 systemd[1]: libpod-conmon-3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86.scope: Deactivated successfully.
Oct 08 16:05:28 compute-0 sudo[134232]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:29 compute-0 sudo[134417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czbizycifruaauqqfomabdzrapylelxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939528.7330675-1492-275489437517788/AnsiballZ_file.py'
Oct 08 16:05:29 compute-0 sudo[134417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:29 compute-0 python3.9[134419]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:29 compute-0 sudo[134417]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:33 compute-0 podman[134444]: 2025-10-08 16:05:33.509840355 +0000 UTC m=+0.106247761 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4)
Oct 08 16:05:36 compute-0 podman[134468]: 2025-10-08 16:05:36.451656478 +0000 UTC m=+0.062631279 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Oct 08 16:05:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:05:41.864 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:05:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:05:41.865 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:05:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:05:41.865 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:05:42 compute-0 podman[134490]: 2025-10-08 16:05:42.457406388 +0000 UTC m=+0.071013895 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.217 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.217 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.728 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.729 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.729 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.729 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.729 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.730 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.730 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:05:46 compute-0 nova_compute[117413]: 2025-10-08 16:05:46.730 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.246 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.247 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.247 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.247 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.399 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.400 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:05:47 compute-0 podman[134561]: 2025-10-08 16:05:47.43610242 +0000 UTC m=+0.047283149 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, 
org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.436 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.437 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6475MB free_disk=73.3062515258789GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.437 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:05:47 compute-0 nova_compute[117413]: 2025-10-08 16:05:47.437 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:05:47 compute-0 sudo[134654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szrfnvsgvtviodzebpydfftjeruzaxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939547.2969327-1700-165745223297808/AnsiballZ_file.py'
Oct 08 16:05:47 compute-0 sudo[134654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:47 compute-0 python3.9[134656]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:47 compute-0 sudo[134654]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:48 compute-0 sudo[134806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjimvztczsqvtcunqjxefeuqpffhzycu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939547.9786646-1716-150845812160976/AnsiballZ_stat.py'
Oct 08 16:05:48 compute-0 sudo[134806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:48 compute-0 nova_compute[117413]: 2025-10-08 16:05:48.482 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:05:48 compute-0 nova_compute[117413]: 2025-10-08 16:05:48.483 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:05:47 up 13 min,  0 user,  load average: 0.44, 0.72, 0.47\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:05:48 compute-0 nova_compute[117413]: 2025-10-08 16:05:48.508 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:05:48 compute-0 python3.9[134808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:48 compute-0 sudo[134806]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:49 compute-0 nova_compute[117413]: 2025-10-08 16:05:49.015 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:05:49 compute-0 sudo[134929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzlsrpsmsptwzgldusmjsmxpihcmvzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939547.9786646-1716-150845812160976/AnsiballZ_copy.py'
Oct 08 16:05:49 compute-0 sudo[134929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:49 compute-0 python3.9[134931]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939547.9786646-1716-150845812160976/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:49 compute-0 sudo[134929]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:49 compute-0 nova_compute[117413]: 2025-10-08 16:05:49.526 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:05:49 compute-0 nova_compute[117413]: 2025-10-08 16:05:49.526 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.089s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:05:49 compute-0 sudo[135081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfrnevpfctujjrschxtcirnghrdnsqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939549.5576632-1748-210179834509527/AnsiballZ_file.py'
Oct 08 16:05:49 compute-0 sudo[135081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:50 compute-0 python3.9[135083]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:50 compute-0 sudo[135081]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:50 compute-0 sudo[135233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkmjowlljzqddrlfbcobuqbioeawrbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939550.2813663-1764-18262561260250/AnsiballZ_stat.py'
Oct 08 16:05:50 compute-0 sudo[135233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:50 compute-0 python3.9[135235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:50 compute-0 sudo[135233]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:51 compute-0 sudo[135311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwszaabirpwrnltiwuadqjqbppaqwczl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939550.2813663-1764-18262561260250/AnsiballZ_file.py'
Oct 08 16:05:51 compute-0 sudo[135311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:51 compute-0 python3.9[135313]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:51 compute-0 sudo[135311]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:51 compute-0 sudo[135463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsarmltdbtcanebcxufwixgupilrrlak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939551.4775112-1788-114736188191615/AnsiballZ_stat.py'
Oct 08 16:05:51 compute-0 sudo[135463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:52 compute-0 python3.9[135465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:52 compute-0 sudo[135463]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:52 compute-0 sudo[135541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwskljrtfxcvcgrwkhyuwmwwggenfuph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939551.4775112-1788-114736188191615/AnsiballZ_file.py'
Oct 08 16:05:52 compute-0 sudo[135541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:52 compute-0 python3.9[135543]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.h5vgdujn recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:52 compute-0 sudo[135541]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:53 compute-0 sudo[135722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hplpprioniyebnkycivlgaivftksosxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939552.818731-1812-80432750577882/AnsiballZ_stat.py'
Oct 08 16:05:53 compute-0 sudo[135722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:53 compute-0 podman[135667]: 2025-10-08 16:05:53.139275766 +0000 UTC m=+0.067365949 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:05:53 compute-0 podman[135668]: 2025-10-08 16:05:53.188138151 +0000 UTC m=+0.111783223 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:05:53 compute-0 python3.9[135736]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:53 compute-0 sudo[135722]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:53 compute-0 sudo[135816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjnjnbfcpctylmllxgtpsilofgtlqjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939552.818731-1812-80432750577882/AnsiballZ_file.py'
Oct 08 16:05:53 compute-0 sudo[135816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:53 compute-0 python3.9[135818]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:53 compute-0 sudo[135816]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:54 compute-0 sudo[135968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrbazxsapjxsjaabpsiunnehwnswvedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939554.1259427-1838-132608741269921/AnsiballZ_command.py'
Oct 08 16:05:54 compute-0 sudo[135968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:54 compute-0 python3.9[135970]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:05:54 compute-0 sudo[135968]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:55 compute-0 sudo[136121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fblvebntppcrvvqyavuiabduhrbiqsoj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759939554.9507406-1854-39580961078219/AnsiballZ_edpm_nftables_from_files.py'
Oct 08 16:05:55 compute-0 sudo[136121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:55 compute-0 python3[136123]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 08 16:05:55 compute-0 sudo[136121]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:56 compute-0 sudo[136273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spohvuynqvfvfmnklgvxypmpjpbirulk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939555.81268-1870-61842175651007/AnsiballZ_stat.py'
Oct 08 16:05:56 compute-0 sudo[136273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:56 compute-0 python3.9[136275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:56 compute-0 sudo[136273]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:56 compute-0 sudo[136351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loeralhabhnddnunefxusonstmbaxpos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939555.81268-1870-61842175651007/AnsiballZ_file.py'
Oct 08 16:05:56 compute-0 sudo[136351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:56 compute-0 python3.9[136353]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:56 compute-0 sudo[136351]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:57 compute-0 sudo[136503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphqksihdpnedocnjsatapgtylegdvgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939557.0984757-1894-68949139062832/AnsiballZ_stat.py'
Oct 08 16:05:57 compute-0 sudo[136503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:57 compute-0 python3.9[136505]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:57 compute-0 sudo[136503]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:57 compute-0 sudo[136581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnkyklbkiuzmwzxzhbcwwlkcwbyugrhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939557.0984757-1894-68949139062832/AnsiballZ_file.py'
Oct 08 16:05:57 compute-0 sudo[136581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:58 compute-0 python3.9[136583]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:58 compute-0 sudo[136581]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:58 compute-0 sudo[136733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btjqmddwnvmebfioqdgotonlqrawmrvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939558.3838475-1918-76743082751458/AnsiballZ_stat.py'
Oct 08 16:05:58 compute-0 sudo[136733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:58 compute-0 python3.9[136735]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:05:58 compute-0 sudo[136733]: pam_unix(sudo:session): session closed for user root
Oct 08 16:05:59 compute-0 sudo[136811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpodeulsplhoyzwibaahslbwgavqjsqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939558.3838475-1918-76743082751458/AnsiballZ_file.py'
Oct 08 16:05:59 compute-0 sudo[136811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:05:59 compute-0 python3.9[136813]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:05:59 compute-0 sudo[136811]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:00 compute-0 sudo[136963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcgoxviyqocfyqmwtxdbsaqyokcbbvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939559.6888402-1942-59537629644387/AnsiballZ_stat.py'
Oct 08 16:06:00 compute-0 sudo[136963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:00 compute-0 python3.9[136965]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:06:00 compute-0 sudo[136963]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:00 compute-0 sudo[137041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdixytsargdurauhuducuwtoybjmulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939559.6888402-1942-59537629644387/AnsiballZ_file.py'
Oct 08 16:06:00 compute-0 sudo[137041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:00 compute-0 python3.9[137043]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:00 compute-0 sudo[137041]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:01 compute-0 sudo[137193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvefizkjwkimbwzbqwfxhumepirnrjng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939561.067993-1966-152510001935506/AnsiballZ_stat.py'
Oct 08 16:06:01 compute-0 sudo[137193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:01 compute-0 python3.9[137195]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 08 16:06:01 compute-0 sudo[137193]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:02 compute-0 sudo[137318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzucbsvrlpiijnhebfahtvymnjqxhlyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939561.067993-1966-152510001935506/AnsiballZ_copy.py'
Oct 08 16:06:02 compute-0 sudo[137318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:02 compute-0 python3.9[137320]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759939561.067993-1966-152510001935506/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:02 compute-0 sudo[137318]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:02 compute-0 sudo[137470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxuelysqjjycoqubnmpipdcdqtkvcios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939562.631968-1996-265067117927661/AnsiballZ_file.py'
Oct 08 16:06:02 compute-0 sudo[137470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:03 compute-0 python3.9[137472]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:03 compute-0 sudo[137470]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:03 compute-0 sudo[137635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmgbucjtfmxqgelqifytkjughvzagznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939563.3411412-2012-165681226817147/AnsiballZ_command.py'
Oct 08 16:06:03 compute-0 sudo[137635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:03 compute-0 podman[137596]: 2025-10-08 16:06:03.695941198 +0000 UTC m=+0.070242154 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:06:03 compute-0 python3.9[137645]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:06:03 compute-0 sudo[137635]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:04 compute-0 sudo[137798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsuimdiywunbrtagjgxphzhkgnstide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939564.1436503-2028-194379541438674/AnsiballZ_blockinfile.py'
Oct 08 16:06:04 compute-0 sudo[137798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:04 compute-0 python3.9[137800]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:04 compute-0 sudo[137798]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:05 compute-0 sudo[137950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsiozcnxazsdqfbnisjzdtuzrvhverqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939565.13188-2046-224343244284019/AnsiballZ_command.py'
Oct 08 16:06:05 compute-0 sudo[137950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:05 compute-0 python3.9[137952]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:06:05 compute-0 sudo[137950]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:06 compute-0 sudo[138103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlhcddzshbjcridfvzmcgnajxqsbjcod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939565.8853352-2062-86958166462764/AnsiballZ_stat.py'
Oct 08 16:06:06 compute-0 sudo[138103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:06 compute-0 python3.9[138105]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 08 16:06:06 compute-0 sudo[138103]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:07 compute-0 sudo[138274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpnsksjfpbliaxphznsmtlnsvarwboif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939566.8297448-2078-234761818930/AnsiballZ_command.py'
Oct 08 16:06:07 compute-0 sudo[138274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:07 compute-0 podman[138231]: 2025-10-08 16:06:07.174675671 +0000 UTC m=+0.071456546 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Oct 08 16:06:07 compute-0 python3.9[138280]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:06:07 compute-0 sudo[138274]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:07 compute-0 sudo[138434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqjujinktgnuqsuzejehgzuhfuxpyjee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759939567.5987694-2094-186526120232602/AnsiballZ_file.py'
Oct 08 16:06:07 compute-0 sudo[138434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:08 compute-0 python3.9[138436]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:08 compute-0 sudo[138434]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: ERROR   16:06:08 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: ERROR   16:06:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: ERROR   16:06:08 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: ERROR   16:06:08 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: ERROR   16:06:08 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:06:08 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:06:08 compute-0 sshd-session[117809]: Connection closed by 192.168.122.30 port 40074
Oct 08 16:06:08 compute-0 sshd-session[117806]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:06:08 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Oct 08 16:06:08 compute-0 systemd[1]: session-11.scope: Consumed 1min 37.580s CPU time.
Oct 08 16:06:08 compute-0 systemd-logind[847]: Session 11 logged out. Waiting for processes to exit.
Oct 08 16:06:08 compute-0 systemd-logind[847]: Removed session 11.
Oct 08 16:06:13 compute-0 unix_chkpwd[138488]: password check failed for user (root)
Oct 08 16:06:13 compute-0 sshd-session[138466]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 08 16:06:13 compute-0 podman[138468]: 2025-10-08 16:06:13.467086677 +0000 UTC m=+0.074993069 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid)
Oct 08 16:06:14 compute-0 sshd-session[138490]: Accepted publickey for zuul from 38.102.83.2 port 45858 ssh2: RSA SHA256:k3KmgR8zel5b9aI7VwI/51HIdQflGsx5vKCeah++Ej4
Oct 08 16:06:14 compute-0 systemd-logind[847]: New session 12 of user zuul.
Oct 08 16:06:14 compute-0 systemd[1]: Started Session 12 of User zuul.
Oct 08 16:06:14 compute-0 sshd-session[138490]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:06:14 compute-0 sudo[138517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttqczqobemtrldplztshqjoekmmajgh ; /usr/bin/python3'
Oct 08 16:06:14 compute-0 sudo[138517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:14 compute-0 python3[138519]: ansible-ansible.legacy.dnf Invoked with name=['nfs-utils', 'iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Oct 08 16:06:14 compute-0 sshd-session[138466]: Failed password for root from 80.94.93.233 port 31010 ssh2
Oct 08 16:06:15 compute-0 unix_chkpwd[138521]: password check failed for user (root)
Oct 08 16:06:15 compute-0 sudo[138517]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:16 compute-0 sudo[138545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtlopyfbrhryfrhkquuidwjzqtlwwqbk ; /usr/bin/python3'
Oct 08 16:06:16 compute-0 sudo[138545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:16 compute-0 python3[138547]: ansible-community.general.ini_file Invoked with path=/etc/nfs.conf section=nfsd option=vers3 value=n backup=True mode=0644 state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:16 compute-0 sudo[138545]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:16 compute-0 sudo[138573]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lchcvamtnjdysyyrskzsooorgrrzcfqq ; /usr/bin/python3'
Oct 08 16:06:16 compute-0 sudo[138573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:17 compute-0 python3[138575]: ansible-ansible.builtin.systemd_service Invoked with name=rpc-statd.service masked=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Oct 08 16:06:17 compute-0 systemd[1]: Reloading.
Oct 08 16:06:17 compute-0 systemd-sysv-generator[138607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:06:17 compute-0 systemd-rc-local-generator[138600]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:06:17 compute-0 sudo[138573]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:17 compute-0 sudo[138653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mblaijzrovtbkkpscnyjzvwdwavyhyva ; /usr/bin/python3'
Oct 08 16:06:17 compute-0 podman[138613]: 2025-10-08 16:06:17.609746357 +0000 UTC m=+0.094579885 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:06:17 compute-0 sudo[138653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:17 compute-0 python3[138658]: ansible-ansible.builtin.systemd_service Invoked with name=rpcbind.service masked=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Oct 08 16:06:17 compute-0 systemd[1]: Reloading.
Oct 08 16:06:18 compute-0 systemd-rc-local-generator[138689]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:06:18 compute-0 systemd-sysv-generator[138692]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:06:18 compute-0 sshd-session[138466]: Failed password for root from 80.94.93.233 port 31010 ssh2
Oct 08 16:06:18 compute-0 systemd[1]: rpcbind.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Oct 08 16:06:18 compute-0 sudo[138653]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:18 compute-0 sudo[138719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqocwxoqcbaaipyjogornglpgyoqaurc ; /usr/bin/python3'
Oct 08 16:06:18 compute-0 sudo[138719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:18 compute-0 python3[138721]: ansible-ansible.builtin.systemd_service Invoked with name=rpcbind.socket masked=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None
Oct 08 16:06:18 compute-0 systemd[1]: Reloading.
Oct 08 16:06:18 compute-0 systemd-sysv-generator[138749]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:06:18 compute-0 systemd-rc-local-generator[138746]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:06:18 compute-0 systemd[1]: rpcbind.socket: Socket unit configuration has changed while unit has been running, no open socket file descriptor left. The socket unit is not functional until restarted.
Oct 08 16:06:19 compute-0 sudo[138719]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:19 compute-0 sudo[138782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njyxgdmzmlgqaadnhyxiuvcuffzazzdl ; /usr/bin/python3'
Oct 08 16:06:19 compute-0 sudo[138782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:19 compute-0 unix_chkpwd[138785]: password check failed for user (root)
Oct 08 16:06:19 compute-0 python3[138784]: ansible-ansible.builtin.file Invoked with path=/data/cinder_backend_1 state=directory mode=755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:19 compute-0 sudo[138782]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:19 compute-0 sudo[138809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgojcwwanxzblnceerzbrkfjpqebcppp ; /usr/bin/python3'
Oct 08 16:06:19 compute-0 sudo[138809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:19 compute-0 python3[138811]: ansible-ansible.builtin.file Invoked with path=/data/cinder_backend_2 state=directory mode=755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:19 compute-0 sudo[138809]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:19 compute-0 sudo[138835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-webkpnqtxwhbqbevbeeqfiifkrkbejoq ; /usr/bin/python3'
Oct 08 16:06:19 compute-0 sudo[138835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:19 compute-0 python3[138837]: ansible-ansible.builtin.file Invoked with path=/data/cinderbackup state=directory mode=755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:20 compute-0 sudo[138835]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:21 compute-0 sshd-session[138466]: Failed password for root from 80.94.93.233 port 31010 ssh2
Oct 08 16:06:21 compute-0 sudo[138913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgvndnueozcjmxdhwhuehwsyrragxguc ; /usr/bin/python3'
Oct 08 16:06:21 compute-0 sudo[138913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:22 compute-0 python3[138915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/nfs-server.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 08 16:06:22 compute-0 sudo[138913]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:22 compute-0 sudo[138986]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlraksmrcyfdohiwsqkubsnpzuyuianr ; /usr/bin/python3'
Oct 08 16:06:22 compute-0 sudo[138986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:22 compute-0 python3[138988]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/nfs-server.nft mode=0666 src=/home/zuul/.ansible/tmp/ansible-tmp-1759939581.6669443-33293-76294373635392/source _original_basename=tmp1ncs4r7f follow=False checksum=f91e6a2e98f3d3c48705976f5b33f9e81e7cf7f4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:22 compute-0 sudo[138986]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:22 compute-0 sudo[139036]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxyymeishlcccbzbzqcnoumyopnuucq ; /usr/bin/python3'
Oct 08 16:06:22 compute-0 sudo[139036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:22 compute-0 python3[139038]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/sysconfig/nftables.conf line=include "/etc/nftables/nfs-server.nft" insertafter=EOF state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:22 compute-0 sudo[139036]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:23 compute-0 sshd-session[138466]: Received disconnect from 80.94.93.233 port 31010:11:  [preauth]
Oct 08 16:06:23 compute-0 sshd-session[138466]: Disconnected from authenticating user root 80.94.93.233 port 31010 [preauth]
Oct 08 16:06:23 compute-0 sshd-session[138466]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 08 16:06:23 compute-0 sudo[139078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wznwgldxeyuzjoiokbwssjpfflhwtdel ; /usr/bin/python3'
Oct 08 16:06:23 compute-0 sudo[139078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:23 compute-0 podman[139042]: 2025-10-08 16:06:23.532836698 +0000 UTC m=+0.119045522 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:06:23 compute-0 podman[139050]: 2025-10-08 16:06:23.593764799 +0000 UTC m=+0.178850480 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 16:06:23 compute-0 python3[139085]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 08 16:06:23 compute-0 systemd[1]: Stopping Netfilter Tables...
Oct 08 16:06:23 compute-0 systemd[1]: nftables.service: Deactivated successfully.
Oct 08 16:06:23 compute-0 systemd[1]: Stopped Netfilter Tables.
Oct 08 16:06:23 compute-0 systemd[1]: Starting Netfilter Tables...
Oct 08 16:06:23 compute-0 systemd[1]: Finished Netfilter Tables.
Oct 08 16:06:23 compute-0 sudo[139078]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:24 compute-0 unix_chkpwd[139118]: password check failed for user (root)
Oct 08 16:06:24 compute-0 sshd-session[139039]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 08 16:06:24 compute-0 sudo[139142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clmwitpqoremwaqloijaaldxvrzkscek ; /usr/bin/python3'
Oct 08 16:06:24 compute-0 sudo[139142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:24 compute-0 python3[139144]: ansible-community.general.ini_file Invoked with path=/etc/nfs.conf section=nfsd option=host value=172.18.0.100 backup=True mode=0644 state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:24 compute-0 sudo[139142]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:24 compute-0 sudo[139170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykugkqyfeejgixrqwowycntdkoonqxwh ; /usr/bin/python3'
Oct 08 16:06:24 compute-0 sudo[139170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:24 compute-0 python3[139172]: ansible-ansible.builtin.systemd Invoked with name=nfs-server state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 08 16:06:24 compute-0 systemd[1]: Reloading.
Oct 08 16:06:24 compute-0 systemd-sysv-generator[139203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 08 16:06:24 compute-0 systemd-rc-local-generator[139200]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 08 16:06:25 compute-0 systemd[1]: rpcbind.socket: Socket unit configuration has changed while unit has been running, no open socket file descriptor left. The socket unit is not functional until restarted.
Oct 08 16:06:25 compute-0 systemd[1]: Mounting NFSD configuration filesystem...
Oct 08 16:06:25 compute-0 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 08 16:06:25 compute-0 systemd[1]: Starting NFSv4 ID-name mapping service...
Oct 08 16:06:25 compute-0 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 08 16:06:25 compute-0 rpc.idmapd[139214]: Setting log level to 0
Oct 08 16:06:25 compute-0 systemd[1]: Started NFSv4 ID-name mapping service.
Oct 08 16:06:25 compute-0 systemd[1]: Mounted NFSD configuration filesystem.
Oct 08 16:06:25 compute-0 systemd[1]: Starting NFS Mount Daemon...
Oct 08 16:06:25 compute-0 systemd[1]: Starting NFSv4 Client Tracking Daemon...
Oct 08 16:06:25 compute-0 systemd[1]: Started NFSv4 Client Tracking Daemon.
Oct 08 16:06:25 compute-0 rpc.mountd[139221]: Version 2.5.4 starting
Oct 08 16:06:25 compute-0 systemd[1]: Started NFS Mount Daemon.
Oct 08 16:06:25 compute-0 systemd[1]: Starting NFS server and services...
Oct 08 16:06:25 compute-0 kernel: RPC: Registered rdma transport module.
Oct 08 16:06:25 compute-0 kernel: RPC: Registered rdma backchannel transport module.
Oct 08 16:06:25 compute-0 sshd-session[139039]: Failed password for root from 80.94.93.233 port 29568 ssh2
Oct 08 16:06:25 compute-0 kernel: NFSD: Using nfsdcld client tracking operations.
Oct 08 16:06:25 compute-0 kernel: NFSD: no clients to reclaim, skipping NFSv4 grace period (net f0000000)
Oct 08 16:06:25 compute-0 systemd[1]: Reloading GSSAPI Proxy Daemon...
Oct 08 16:06:25 compute-0 systemd[1]: Reloaded GSSAPI Proxy Daemon.
Oct 08 16:06:25 compute-0 systemd[1]: Finished NFS server and services.
Oct 08 16:06:25 compute-0 sudo[139170]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:25 compute-0 sudo[139263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwifkguaadcgfapubmveuxmaruaoakkr ; /usr/bin/python3'
Oct 08 16:06:25 compute-0 sudo[139263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:25 compute-0 unix_chkpwd[139266]: password check failed for user (root)
Oct 08 16:06:26 compute-0 python3[139265]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/exports line=/data/cinder_backend_1 172.18.0.0/24(rw,sync,no_root_squash) state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:26 compute-0 sudo[139263]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:26 compute-0 sudo[139290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpxshpuwycrjedsyziixunluleabbeqx ; /usr/bin/python3'
Oct 08 16:06:26 compute-0 sudo[139290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:26 compute-0 python3[139292]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/exports line=/data/cinder_backend_2 172.18.0.0/24(rw,sync,no_root_squash) state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:26 compute-0 sudo[139290]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:26 compute-0 sudo[139316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzqowthnacdgjlixyldeavkupjwnmqo ; /usr/bin/python3'
Oct 08 16:06:26 compute-0 sudo[139316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:26 compute-0 python3[139318]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/exports line=/data/cinderbackup 172.18.0.0/24(rw,sync,no_root_squash) state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 08 16:06:26 compute-0 sudo[139316]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:26 compute-0 sudo[139342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oszyvqdfcnnhsdxlozodbrotupzmqakr ; /usr/bin/python3'
Oct 08 16:06:26 compute-0 sudo[139342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:06:26 compute-0 python3[139344]: ansible-ansible.legacy.command Invoked with _raw_params=exportfs -a _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 08 16:06:27 compute-0 sudo[139342]: pam_unix(sudo:session): session closed for user root
Oct 08 16:06:27 compute-0 sshd-session[139039]: Failed password for root from 80.94.93.233 port 29568 ssh2
Oct 08 16:06:27 compute-0 unix_chkpwd[139346]: password check failed for user (root)
Oct 08 16:06:29 compute-0 podman[127881]: time="2025-10-08T16:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:06:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:06:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2999 "" "Go-http-client/1.1"
Oct 08 16:06:30 compute-0 sshd-session[139039]: Failed password for root from 80.94.93.233 port 29568 ssh2
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: ERROR   16:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: ERROR   16:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: ERROR   16:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:06:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:06:31 compute-0 sshd-session[139039]: Received disconnect from 80.94.93.233 port 29568:11:  [preauth]
Oct 08 16:06:31 compute-0 sshd-session[139039]: Disconnected from authenticating user root 80.94.93.233 port 29568 [preauth]
Oct 08 16:06:31 compute-0 sshd-session[139039]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 08 16:06:32 compute-0 unix_chkpwd[139349]: password check failed for user (root)
Oct 08 16:06:32 compute-0 sshd-session[139347]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 08 16:06:34 compute-0 podman[139350]: 2025-10-08 16:06:34.501409485 +0000 UTC m=+0.090652341 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:06:34 compute-0 sshd-session[139347]: Failed password for root from 80.94.93.233 port 21950 ssh2
Oct 08 16:06:36 compute-0 unix_chkpwd[139370]: password check failed for user (root)
Oct 08 16:06:37 compute-0 podman[139371]: 2025-10-08 16:06:37.493480897 +0000 UTC m=+0.091256968 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm)
Oct 08 16:06:37 compute-0 sshd-session[139347]: Failed password for root from 80.94.93.233 port 21950 ssh2
Oct 08 16:06:38 compute-0 unix_chkpwd[139392]: password check failed for user (root)
Oct 08 16:06:41 compute-0 sshd-session[139347]: Failed password for root from 80.94.93.233 port 21950 ssh2
Oct 08 16:06:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:06:41.867 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:06:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:06:41.868 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:06:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:06:41.868 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:06:42 compute-0 sshd-session[139347]: Received disconnect from 80.94.93.233 port 21950:11:  [preauth]
Oct 08 16:06:42 compute-0 sshd-session[139347]: Disconnected from authenticating user root 80.94.93.233 port 21950 [preauth]
Oct 08 16:06:42 compute-0 sshd-session[139347]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 08 16:06:44 compute-0 podman[139394]: 2025-10-08 16:06:44.487284266 +0000 UTC m=+0.083157864 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 08 16:06:48 compute-0 podman[139414]: 2025-10-08 16:06:48.498159177 +0000 UTC m=+0.095308416 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.528 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.528 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:06:49 compute-0 nova_compute[117413]: 2025-10-08 16:06:49.529 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.141 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.141 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.142 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.142 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.303 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.304 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.345 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.346 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6480MB free_disk=73.30515670776367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.346 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:06:50 compute-0 nova_compute[117413]: 2025-10-08 16:06:50.347 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:06:51 compute-0 nova_compute[117413]: 2025-10-08 16:06:51.391 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:06:51 compute-0 nova_compute[117413]: 2025-10-08 16:06:51.392 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:06:50 up 14 min,  0 user,  load average: 0.40, 0.68, 0.47\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:06:51 compute-0 nova_compute[117413]: 2025-10-08 16:06:51.411 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:06:51 compute-0 nova_compute[117413]: 2025-10-08 16:06:51.920 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:06:52 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 08 16:06:52 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 08 16:06:52 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 08 16:06:52 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 08 16:06:52 compute-0 nova_compute[117413]: 2025-10-08 16:06:52.429 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:06:52 compute-0 nova_compute[117413]: 2025-10-08 16:06:52.430 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.084s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:06:54 compute-0 podman[139444]: 2025-10-08 16:06:54.504945668 +0000 UTC m=+0.103043399 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:06:54 compute-0 podman[139445]: 2025-10-08 16:06:54.535981576 +0000 UTC m=+0.122720239 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:06:59 compute-0 podman[127881]: time="2025-10-08T16:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:06:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:06:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3003 "" "Go-http-client/1.1"
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: ERROR   16:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: ERROR   16:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: ERROR   16:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:07:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:07:05 compute-0 podman[139496]: 2025-10-08 16:07:05.45122942 +0000 UTC m=+0.058650557 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:07:08 compute-0 podman[139517]: 2025-10-08 16:07:08.49927355 +0000 UTC m=+0.089618541 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
Oct 08 16:07:15 compute-0 podman[139539]: 2025-10-08 16:07:15.488101753 +0000 UTC m=+0.092274687 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:07:19 compute-0 podman[139559]: 2025-10-08 16:07:19.471291204 +0000 UTC m=+0.085536683 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:07:22 compute-0 PackageKit[55591]: daemon quit
Oct 08 16:07:22 compute-0 systemd[1]: packagekit.service: Deactivated successfully.
Oct 08 16:07:25 compute-0 podman[139578]: 2025-10-08 16:07:25.454925228 +0000 UTC m=+0.057964296 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:07:25 compute-0 podman[139579]: 2025-10-08 16:07:25.48682948 +0000 UTC m=+0.088073486 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 08 16:07:29 compute-0 podman[127881]: time="2025-10-08T16:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:07:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:07:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3004 "" "Go-http-client/1.1"
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: ERROR   16:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: ERROR   16:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: ERROR   16:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:07:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:07:36 compute-0 podman[139626]: 2025-10-08 16:07:36.464687225 +0000 UTC m=+0.075335478 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 08 16:07:39 compute-0 podman[139647]: 2025-10-08 16:07:39.458936032 +0000 UTC m=+0.065656958 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, config_id=edpm, version=9.6, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 08 16:07:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:07:41.869 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:07:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:07:41.870 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:07:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:07:41.870 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:07:46 compute-0 podman[139669]: 2025-10-08 16:07:46.470493346 +0000 UTC m=+0.070993752 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid)
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.266 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.266 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.793 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.793 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.793 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.793 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.794 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.794 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:48 compute-0 nova_compute[117413]: 2025-10-08 16:07:48.794 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:07:49 compute-0 nova_compute[117413]: 2025-10-08 16:07:49.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:07:49 compute-0 nova_compute[117413]: 2025-10-08 16:07:49.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:07:49 compute-0 nova_compute[117413]: 2025-10-08 16:07:49.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:07:49 compute-0 nova_compute[117413]: 2025-10-08 16:07:49.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:07:49 compute-0 nova_compute[117413]: 2025-10-08 16:07:49.878 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:07:50 compute-0 nova_compute[117413]: 2025-10-08 16:07:50.025 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:07:50 compute-0 nova_compute[117413]: 2025-10-08 16:07:50.026 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:07:50 compute-0 nova_compute[117413]: 2025-10-08 16:07:50.045 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:07:50 compute-0 nova_compute[117413]: 2025-10-08 16:07:50.046 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6501MB free_disk=73.30515670776367GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:07:50 compute-0 nova_compute[117413]: 2025-10-08 16:07:50.046 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:07:50 compute-0 nova_compute[117413]: 2025-10-08 16:07:50.047 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:07:50 compute-0 podman[139690]: 2025-10-08 16:07:50.48117836 +0000 UTC m=+0.073909856 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:07:51 compute-0 nova_compute[117413]: 2025-10-08 16:07:51.218 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:07:51 compute-0 nova_compute[117413]: 2025-10-08 16:07:51.220 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:07:50 up 15 min,  0 user,  load average: 0.21, 0.57, 0.44\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:07:51 compute-0 nova_compute[117413]: 2025-10-08 16:07:51.256 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:07:51 compute-0 nova_compute[117413]: 2025-10-08 16:07:51.861 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:07:52 compute-0 nova_compute[117413]: 2025-10-08 16:07:52.873 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:07:52 compute-0 nova_compute[117413]: 2025-10-08 16:07:52.874 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.827s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:07:56 compute-0 podman[139709]: 2025-10-08 16:07:56.461960823 +0000 UTC m=+0.064380861 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:07:56 compute-0 podman[139710]: 2025-10-08 16:07:56.488538701 +0000 UTC m=+0.085848912 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:08:07 compute-0 podman[139758]: 2025-10-08 16:08:07.470631029 +0000 UTC m=+0.071820276 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:08:10 compute-0 podman[139778]: 2025-10-08 16:08:10.44316679 +0000 UTC m=+0.056711700 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, config_id=edpm, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:08:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:11.252 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:08:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:11.253 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:08:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:11.254 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:08:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.298 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:30:4d 192.168.122.171'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.171/24', 'neutron:device_id': 'ovnmeta-a9c464ed-c4fd-4561-b4e2-9503a1b6fd7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9c464ed-c4fd-4561-b4e2-9503a1b6fd7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2eb43725f1e4dbfa51aeb475eac607e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=732149f0-dd19-4622-a99c-62953f587e1d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7ecba73e-0ec4-44f5-9269-5fe2d72c0454) old=Port_Binding(mac=['fa:16:3e:c3:30:4d'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-a9c464ed-c4fd-4561-b4e2-9503a1b6fd7e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9c464ed-c4fd-4561-b4e2-9503a1b6fd7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2eb43725f1e4dbfa51aeb475eac607e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:08:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.300 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 7ecba73e-0ec4-44f5-9269-5fe2d72c0454 in datapath a9c464ed-c4fd-4561-b4e2-9503a1b6fd7e updated
Oct 08 16:08:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.302 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a9c464ed-c4fd-4561-b4e2-9503a1b6fd7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:08:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.303 28633 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpsfbdpu9m/privsep.sock']
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:16.097 28633 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:16.097 28633 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsfbdpu9m/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.930 139805 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.934 139805 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.935 139805 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:15.935 139805 INFO oslo.privsep.daemon [-] privsep daemon running as pid 139805
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:16.099 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d14ad4cc-384e-4b06-a99a-39231c3f2cb4]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:16.559 139805 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:16.559 139805 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:08:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:16.559 139805 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:08:17 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:17.028 139805 INFO oslo_service.backend [-] Loading backend: eventlet
Oct 08 16:08:17 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:17.033 139805 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Oct 08 16:08:17 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:17.069 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8cb208-e79d-4bfd-88d2-e5e8bdf2a25a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:08:17 compute-0 podman[139810]: 2025-10-08 16:08:17.469129937 +0000 UTC m=+0.078685001 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:08:21 compute-0 podman[139831]: 2025-10-08 16:08:21.440559606 +0000 UTC m=+0.053030001 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:08:27 compute-0 podman[139851]: 2025-10-08 16:08:27.470986487 +0000 UTC m=+0.071379091 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:08:27 compute-0 podman[139852]: 2025-10-08 16:08:27.502956599 +0000 UTC m=+0.111128667 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:08:29 compute-0 podman[127881]: time="2025-10-08T16:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:08:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:08:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Oct 08 16:08:31 compute-0 openstack_network_exporter[130039]: ERROR   16:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:08:31 compute-0 openstack_network_exporter[130039]: ERROR   16:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:08:31 compute-0 openstack_network_exporter[130039]: ERROR   16:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:08:31 compute-0 openstack_network_exporter[130039]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:08:31 compute-0 openstack_network_exporter[130039]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:08:38 compute-0 podman[139904]: 2025-10-08 16:08:38.469742017 +0000 UTC m=+0.067378805 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd)
Oct 08 16:08:41 compute-0 podman[139925]: 2025-10-08 16:08:41.468786904 +0000 UTC m=+0.081486971 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible)
Oct 08 16:08:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:41.871 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:08:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:41.871 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:08:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:08:41.871 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:08:43 compute-0 nova_compute[117413]: 2025-10-08 16:08:43.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:43 compute-0 nova_compute[117413]: 2025-10-08 16:08:43.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:08:45 compute-0 nova_compute[117413]: 2025-10-08 16:08:45.584 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:08:45 compute-0 nova_compute[117413]: 2025-10-08 16:08:45.585 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:45 compute-0 nova_compute[117413]: 2025-10-08 16:08:45.585 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:08:46 compute-0 nova_compute[117413]: 2025-10-08 16:08:46.327 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:48 compute-0 podman[139947]: 2025-10-08 16:08:48.462831124 +0000 UTC m=+0.076043245 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.863 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.863 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.863 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.863 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.863 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.864 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.864 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:08:49 compute-0 nova_compute[117413]: 2025-10-08 16:08:49.864 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.506 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.507 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.507 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.507 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.636 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.638 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.656 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.657 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6370MB free_disk=73.30610275268555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.657 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:08:50 compute-0 nova_compute[117413]: 2025-10-08 16:08:50.658 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:08:52 compute-0 nova_compute[117413]: 2025-10-08 16:08:52.007 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:08:52 compute-0 nova_compute[117413]: 2025-10-08 16:08:52.007 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:08:50 up 16 min,  0 user,  load average: 0.30, 0.53, 0.44\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:08:52 compute-0 nova_compute[117413]: 2025-10-08 16:08:52.031 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:08:52 compute-0 podman[139969]: 2025-10-08 16:08:52.453976623 +0000 UTC m=+0.064166512 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest)
Oct 08 16:08:52 compute-0 nova_compute[117413]: 2025-10-08 16:08:52.590 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:08:53 compute-0 nova_compute[117413]: 2025-10-08 16:08:53.397 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:08:53 compute-0 nova_compute[117413]: 2025-10-08 16:08:53.397 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.739s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:08:53 compute-0 nova_compute[117413]: 2025-10-08 16:08:53.897 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:08:58 compute-0 podman[139989]: 2025-10-08 16:08:58.470316176 +0000 UTC m=+0.077862897 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:08:58 compute-0 podman[139990]: 2025-10-08 16:08:58.514518021 +0000 UTC m=+0.114462443 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:08:59 compute-0 podman[127881]: time="2025-10-08T16:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:08:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:08:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: ERROR   16:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: ERROR   16:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: ERROR   16:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:09:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:09:09 compute-0 podman[140038]: 2025-10-08 16:09:09.503232873 +0000 UTC m=+0.106201585 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:09:12 compute-0 podman[140058]: 2025-10-08 16:09:12.498258915 +0000 UTC m=+0.092677305 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 08 16:09:19 compute-0 podman[140079]: 2025-10-08 16:09:19.462943198 +0000 UTC m=+0.064891473 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:09:23 compute-0 podman[140099]: 2025-10-08 16:09:23.48153268 +0000 UTC m=+0.088940496 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:09:29 compute-0 podman[140119]: 2025-10-08 16:09:29.458273112 +0000 UTC m=+0.063986561 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:09:29 compute-0 podman[140120]: 2025-10-08 16:09:29.516699014 +0000 UTC m=+0.112433570 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 08 16:09:29 compute-0 podman[127881]: time="2025-10-08T16:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:09:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:09:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: ERROR   16:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: ERROR   16:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: ERROR   16:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:09:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:09:40 compute-0 podman[140169]: 2025-10-08 16:09:40.454534585 +0000 UTC m=+0.065100852 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Oct 08 16:09:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:09:41.872 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:09:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:09:41.873 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:09:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:09:41.873 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:09:43 compute-0 podman[140189]: 2025-10-08 16:09:43.473396219 +0000 UTC m=+0.078052689 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Oct 08 16:09:47 compute-0 nova_compute[117413]: 2025-10-08 16:09:47.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:47 compute-0 nova_compute[117413]: 2025-10-08 16:09:47.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:47 compute-0 nova_compute[117413]: 2025-10-08 16:09:47.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:09:48 compute-0 nova_compute[117413]: 2025-10-08 16:09:48.359 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:48 compute-0 nova_compute[117413]: 2025-10-08 16:09:48.998 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:48 compute-0 nova_compute[117413]: 2025-10-08 16:09:48.998 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:09:50 compute-0 podman[140210]: 2025-10-08 16:09:50.460620549 +0000 UTC m=+0.067808279 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:09:50 compute-0 nova_compute[117413]: 2025-10-08 16:09:50.877 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:09:51 compute-0 nova_compute[117413]: 2025-10-08 16:09:51.035 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:09:51 compute-0 nova_compute[117413]: 2025-10-08 16:09:51.036 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:09:51 compute-0 nova_compute[117413]: 2025-10-08 16:09:51.059 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:09:51 compute-0 nova_compute[117413]: 2025-10-08 16:09:51.060 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6402MB free_disk=73.30610275268555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:09:51 compute-0 nova_compute[117413]: 2025-10-08 16:09:51.060 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:09:51 compute-0 nova_compute[117413]: 2025-10-08 16:09:51.060 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.162 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.162 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:09:51 up 17 min,  0 user,  load average: 0.11, 0.43, 0.41\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.195 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.225 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.225 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.235 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.250 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ST
ORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.272 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:09:52 compute-0 nova_compute[117413]: 2025-10-08 16:09:52.897 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:09:53 compute-0 nova_compute[117413]: 2025-10-08 16:09:53.413 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:09:53 compute-0 nova_compute[117413]: 2025-10-08 16:09:53.413 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.353s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:09:54 compute-0 podman[140229]: 2025-10-08 16:09:54.454196806 +0000 UTC m=+0.056450287 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent)
Oct 08 16:09:59 compute-0 podman[127881]: time="2025-10-08T16:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:09:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:09:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 08 16:09:59 compute-0 podman[140249]: 2025-10-08 16:09:59.83650712 +0000 UTC m=+0.056013615 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:09:59 compute-0 podman[140250]: 2025-10-08 16:09:59.869806432 +0000 UTC m=+0.087358062 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: ERROR   16:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: ERROR   16:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: ERROR   16:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:10:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:10:11 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 08 16:10:11 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 08 16:10:11 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 08 16:10:11 compute-0 podman[140296]: 2025-10-08 16:10:11.484808444 +0000 UTC m=+0.089889763 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 08 16:10:14 compute-0 podman[140318]: 2025-10-08 16:10:14.489195039 +0000 UTC m=+0.090637715 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9)
Oct 08 16:10:21 compute-0 podman[140339]: 2025-10-08 16:10:21.474440312 +0000 UTC m=+0.084923223 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 08 16:10:25 compute-0 podman[140359]: 2025-10-08 16:10:25.478428109 +0000 UTC m=+0.066748115 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:10:29 compute-0 podman[127881]: time="2025-10-08T16:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:10:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:10:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3007 "" "Go-http-client/1.1"
Oct 08 16:10:30 compute-0 podman[140378]: 2025-10-08 16:10:30.480437385 +0000 UTC m=+0.076153804 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:10:30 compute-0 podman[140379]: 2025-10-08 16:10:30.5247727 +0000 UTC m=+0.116292659 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: ERROR   16:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: ERROR   16:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: ERROR   16:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:10:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:10:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:10:41.874 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:10:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:10:41.874 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:10:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:10:41.875 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:10:42 compute-0 podman[140430]: 2025-10-08 16:10:42.468129706 +0000 UTC m=+0.072220822 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:10:45 compute-0 podman[140450]: 2025-10-08 16:10:45.480612887 +0000 UTC m=+0.076545405 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.414 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.414 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.415 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.415 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.415 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.415 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.415 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:10:51 compute-0 nova_compute[117413]: 2025-10-08 16:10:51.415 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:52 compute-0 podman[140472]: 2025-10-08 16:10:52.442198674 +0000 UTC m=+0.052530540 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.485 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.486 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.486 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.486 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.651 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.652 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.669 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.670 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6402MB free_disk=73.30610275268555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.670 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:10:52 compute-0 nova_compute[117413]: 2025-10-08 16:10:52.670 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:10:53 compute-0 nova_compute[117413]: 2025-10-08 16:10:53.874 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:10:53 compute-0 nova_compute[117413]: 2025-10-08 16:10:53.874 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:10:52 up 19 min,  0 user,  load average: 0.03, 0.34, 0.38\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:10:53 compute-0 nova_compute[117413]: 2025-10-08 16:10:53.896 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:10:54 compute-0 nova_compute[117413]: 2025-10-08 16:10:54.461 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:10:55 compute-0 nova_compute[117413]: 2025-10-08 16:10:55.018 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:10:55 compute-0 nova_compute[117413]: 2025-10-08 16:10:55.018 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.348s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:10:55 compute-0 nova_compute[117413]: 2025-10-08 16:10:55.966 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:10:56 compute-0 podman[140494]: 2025-10-08 16:10:56.457522009 +0000 UTC m=+0.058405347 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 08 16:10:59 compute-0 podman[127881]: time="2025-10-08T16:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:10:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:10:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: ERROR   16:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: ERROR   16:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: ERROR   16:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:11:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:11:01 compute-0 podman[140512]: 2025-10-08 16:11:01.455320796 +0000 UTC m=+0.061491536 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:11:01 compute-0 podman[140513]: 2025-10-08 16:11:01.538968103 +0000 UTC m=+0.135312282 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251007, tcib_managed=true, config_id=ovn_controller)
Oct 08 16:11:13 compute-0 podman[140562]: 2025-10-08 16:11:13.451782295 +0000 UTC m=+0.062080782 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:11:16 compute-0 podman[140582]: 2025-10-08 16:11:16.446710255 +0000 UTC m=+0.055180325 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 16:11:23 compute-0 podman[140603]: 2025-10-08 16:11:23.479102971 +0000 UTC m=+0.084850241 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:11:26 compute-0 sshd-session[138493]: Received disconnect from 38.102.83.2 port 45858:11: disconnected by user
Oct 08 16:11:26 compute-0 sshd-session[138493]: Disconnected from user zuul 38.102.83.2 port 45858
Oct 08 16:11:26 compute-0 sshd-session[138490]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:11:26 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Oct 08 16:11:26 compute-0 systemd[1]: session-12.scope: Consumed 7.595s CPU time.
Oct 08 16:11:26 compute-0 systemd-logind[847]: Session 12 logged out. Waiting for processes to exit.
Oct 08 16:11:26 compute-0 systemd-logind[847]: Removed session 12.
Oct 08 16:11:27 compute-0 podman[140625]: 2025-10-08 16:11:27.474500746 +0000 UTC m=+0.079043347 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:11:29 compute-0 podman[127881]: time="2025-10-08T16:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:11:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:11:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3008 "" "Go-http-client/1.1"
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: ERROR   16:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: ERROR   16:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: ERROR   16:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:11:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:11:32 compute-0 podman[140647]: 2025-10-08 16:11:32.469068506 +0000 UTC m=+0.065946231 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:11:32 compute-0 podman[140648]: 2025-10-08 16:11:32.533356808 +0000 UTC m=+0.124305985 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:11:36 compute-0 systemd[1]: Stopping User Manager for UID 1000...
Oct 08 16:11:36 compute-0 systemd[1324]: Activating special unit Exit the Session...
Oct 08 16:11:36 compute-0 systemd[1324]: Removed slice User Background Tasks Slice.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped target Main User Target.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped target Basic System.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped target Paths.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped target Sockets.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped target Timers.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 16:11:36 compute-0 systemd[1324]: Closed D-Bus User Message Bus Socket.
Oct 08 16:11:36 compute-0 systemd[1324]: Stopped Create User's Volatile Files and Directories.
Oct 08 16:11:36 compute-0 systemd[1324]: Removed slice User Application Slice.
Oct 08 16:11:36 compute-0 systemd[1324]: Reached target Shutdown.
Oct 08 16:11:36 compute-0 systemd[1324]: Finished Exit the Session.
Oct 08 16:11:36 compute-0 systemd[1324]: Reached target Exit the Session.
Oct 08 16:11:36 compute-0 systemd[1]: user@1000.service: Deactivated successfully.
Oct 08 16:11:36 compute-0 systemd[1]: Stopped User Manager for UID 1000.
Oct 08 16:11:36 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 08 16:11:36 compute-0 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 08 16:11:36 compute-0 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 08 16:11:36 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 08 16:11:36 compute-0 systemd[1]: Removed slice User Slice of UID 1000.
Oct 08 16:11:36 compute-0 systemd[1]: user-1000.slice: Consumed 9min 52.309s CPU time.
Oct 08 16:11:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:11:41.876 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:11:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:11:41.876 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:11:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:11:41.877 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:11:44 compute-0 podman[140697]: 2025-10-08 16:11:44.459442867 +0000 UTC m=+0.066995509 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 16:11:47 compute-0 podman[140719]: 2025-10-08 16:11:47.480927459 +0000 UTC m=+0.090817990 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 08 16:11:48 compute-0 nova_compute[117413]: 2025-10-08 16:11:48.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:48 compute-0 nova_compute[117413]: 2025-10-08 16:11:48.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:48 compute-0 nova_compute[117413]: 2025-10-08 16:11:48.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:11:50 compute-0 nova_compute[117413]: 2025-10-08 16:11:50.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:50 compute-0 nova_compute[117413]: 2025-10-08 16:11:50.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:50 compute-0 nova_compute[117413]: 2025-10-08 16:11:50.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:51 compute-0 nova_compute[117413]: 2025-10-08 16:11:51.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:51 compute-0 nova_compute[117413]: 2025-10-08 16:11:51.872 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:51 compute-0 nova_compute[117413]: 2025-10-08 16:11:51.872 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.443 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.444 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.444 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.444 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.599 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.601 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.622 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.623 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6403MB free_disk=73.30728149414062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.623 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:11:52 compute-0 nova_compute[117413]: 2025-10-08 16:11:52.624 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:11:53 compute-0 nova_compute[117413]: 2025-10-08 16:11:53.703 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:11:53 compute-0 nova_compute[117413]: 2025-10-08 16:11:53.703 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:11:52 up 20 min,  0 user,  load average: 0.05, 0.29, 0.36\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:11:53 compute-0 nova_compute[117413]: 2025-10-08 16:11:53.722 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:11:54 compute-0 nova_compute[117413]: 2025-10-08 16:11:54.235 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:11:54 compute-0 podman[140742]: 2025-10-08 16:11:54.481309823 +0000 UTC m=+0.075686129 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:11:54 compute-0 nova_compute[117413]: 2025-10-08 16:11:54.752 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:11:54 compute-0 nova_compute[117413]: 2025-10-08 16:11:54.752 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.129s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:11:55 compute-0 nova_compute[117413]: 2025-10-08 16:11:55.242 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:11:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:11:58.313 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:11:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:11:58.315 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:11:58 compute-0 podman[140764]: 2025-10-08 16:11:58.452193937 +0000 UTC m=+0.056468890 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:11:59 compute-0 podman[127881]: time="2025-10-08T16:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:11:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:11:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3004 "" "Go-http-client/1.1"
Oct 08 16:12:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:01.255 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:f0:e7 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-929c27ce-c1fc-4118-b3b6-4e3c768ad795', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-929c27ce-c1fc-4118-b3b6-4e3c768ad795', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9755034a1554bc3b9a7f4c940f906d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f564cd5c-5411-4498-908d-01a0ab8ca30f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=72e0a9b7-e06f-4959-bb64-72b164d3d957) old=Port_Binding(mac=['fa:16:3e:7c:f0:e7'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-929c27ce-c1fc-4118-b3b6-4e3c768ad795', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-929c27ce-c1fc-4118-b3b6-4e3c768ad795', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9755034a1554bc3b9a7f4c940f906d4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:12:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:01.257 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 72e0a9b7-e06f-4959-bb64-72b164d3d957 in datapath 929c27ce-c1fc-4118-b3b6-4e3c768ad795 updated
Oct 08 16:12:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:01.258 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 929c27ce-c1fc-4118-b3b6-4e3c768ad795, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:12:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:01.258 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c20fd9-3ebd-4f0f-86d7-f9f54f5ede62]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: ERROR   16:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: ERROR   16:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: ERROR   16:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:12:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:12:03 compute-0 podman[140783]: 2025-10-08 16:12:03.452278961 +0000 UTC m=+0.060146505 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:12:03 compute-0 podman[140784]: 2025-10-08 16:12:03.486671181 +0000 UTC m=+0.089646036 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:12:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:06.317 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:12:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:15.222 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:e7:5f 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-97290135-56f9-4f3e-876f-50bafaef387d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97290135-56f9-4f3e-876f-50bafaef387d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c74d59c827d449295d872fec0dddbd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb7250b3-3eed-4823-8850-16ebc6a86071, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=acd44618-ce86-4b41-9231-24a6ec9dada0) old=Port_Binding(mac=['fa:16:3e:06:e7:5f'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-97290135-56f9-4f3e-876f-50bafaef387d', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-97290135-56f9-4f3e-876f-50bafaef387d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c74d59c827d449295d872fec0dddbd7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:12:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:15.223 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port acd44618-ce86-4b41-9231-24a6ec9dada0 in datapath 97290135-56f9-4f3e-876f-50bafaef387d updated
Oct 08 16:12:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:15.224 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 97290135-56f9-4f3e-876f-50bafaef387d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:12:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:15.225 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9d65bc-1113-45d3-a857-0e22bc998e96]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:12:15 compute-0 podman[140833]: 2025-10-08 16:12:15.466248124 +0000 UTC m=+0.060088283 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:12:18 compute-0 podman[140854]: 2025-10-08 16:12:18.447938311 +0000 UTC m=+0.053483705 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:12:25 compute-0 podman[140875]: 2025-10-08 16:12:25.470891338 +0000 UTC m=+0.073615969 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 08 16:12:29 compute-0 podman[140895]: 2025-10-08 16:12:29.485710896 +0000 UTC m=+0.092314723 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 08 16:12:29 compute-0 podman[127881]: time="2025-10-08T16:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:12:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:12:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3005 "" "Go-http-client/1.1"
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: ERROR   16:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: ERROR   16:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: ERROR   16:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:12:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:12:34 compute-0 podman[140915]: 2025-10-08 16:12:34.451930264 +0000 UTC m=+0.061108333 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:12:34 compute-0 podman[140916]: 2025-10-08 16:12:34.491703838 +0000 UTC m=+0.096125211 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 16:12:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:41.877 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:12:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:41.878 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:12:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:12:41.878 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:12:46 compute-0 podman[140968]: 2025-10-08 16:12:46.440479192 +0000 UTC m=+0.048840583 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 16:12:49 compute-0 nova_compute[117413]: 2025-10-08 16:12:49.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:49 compute-0 nova_compute[117413]: 2025-10-08 16:12:49.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:49 compute-0 nova_compute[117413]: 2025-10-08 16:12:49.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:12:49 compute-0 podman[140988]: 2025-10-08 16:12:49.441262044 +0000 UTC m=+0.052280741 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 08 16:12:50 compute-0 nova_compute[117413]: 2025-10-08 16:12:50.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:50 compute-0 nova_compute[117413]: 2025-10-08 16:12:50.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:51 compute-0 nova_compute[117413]: 2025-10-08 16:12:51.360 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:51 compute-0 nova_compute[117413]: 2025-10-08 16:12:51.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:51 compute-0 nova_compute[117413]: 2025-10-08 16:12:51.941 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:12:51 compute-0 nova_compute[117413]: 2025-10-08 16:12:51.941 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:12:51 compute-0 nova_compute[117413]: 2025-10-08 16:12:51.942 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:12:51 compute-0 nova_compute[117413]: 2025-10-08 16:12:51.942 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:12:52 compute-0 nova_compute[117413]: 2025-10-08 16:12:52.062 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:12:52 compute-0 nova_compute[117413]: 2025-10-08 16:12:52.064 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:12:52 compute-0 nova_compute[117413]: 2025-10-08 16:12:52.089 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.025s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:12:52 compute-0 nova_compute[117413]: 2025-10-08 16:12:52.090 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6413MB free_disk=73.30728149414062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:12:52 compute-0 nova_compute[117413]: 2025-10-08 16:12:52.090 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:12:52 compute-0 nova_compute[117413]: 2025-10-08 16:12:52.091 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:12:53 compute-0 nova_compute[117413]: 2025-10-08 16:12:53.292 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:12:53 compute-0 nova_compute[117413]: 2025-10-08 16:12:53.293 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:12:52 up 21 min,  0 user,  load average: 0.02, 0.24, 0.33\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:12:53 compute-0 nova_compute[117413]: 2025-10-08 16:12:53.316 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:12:53 compute-0 nova_compute[117413]: 2025-10-08 16:12:53.891 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:12:54 compute-0 nova_compute[117413]: 2025-10-08 16:12:54.407 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:12:54 compute-0 nova_compute[117413]: 2025-10-08 16:12:54.407 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.317s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:12:55 compute-0 nova_compute[117413]: 2025-10-08 16:12:55.409 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:55 compute-0 nova_compute[117413]: 2025-10-08 16:12:55.409 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:12:56 compute-0 podman[141010]: 2025-10-08 16:12:56.471809139 +0000 UTC m=+0.083847771 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:12:59 compute-0 podman[127881]: time="2025-10-08T16:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:12:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:12:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3008 "" "Go-http-client/1.1"
Oct 08 16:13:00 compute-0 podman[141030]: 2025-10-08 16:13:00.443842595 +0000 UTC m=+0.055373928 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: ERROR   16:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: ERROR   16:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: ERROR   16:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:13:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:13:05 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:05.239 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:13:05 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:05.240 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:13:05 compute-0 podman[141051]: 2025-10-08 16:13:05.440193173 +0000 UTC m=+0.044019596 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:13:05 compute-0 podman[141052]: 2025-10-08 16:13:05.475781038 +0000 UTC m=+0.075530814 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible)
Oct 08 16:13:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:10.242 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:13:17 compute-0 podman[141104]: 2025-10-08 16:13:17.451984185 +0000 UTC m=+0.054441542 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:13:20 compute-0 podman[141124]: 2025-10-08 16:13:20.475212265 +0000 UTC m=+0.076798870 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm)
Oct 08 16:13:27 compute-0 podman[141145]: 2025-10-08 16:13:27.444660938 +0000 UTC m=+0.053563407 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, config_id=iscsid, io.buildah.version=1.41.4)
Oct 08 16:13:29 compute-0 podman[127881]: time="2025-10-08T16:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:13:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:13:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: ERROR   16:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: ERROR   16:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: ERROR   16:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:13:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:13:31 compute-0 podman[141165]: 2025-10-08 16:13:31.475827282 +0000 UTC m=+0.072936220 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:13:36 compute-0 podman[141184]: 2025-10-08 16:13:36.46171187 +0000 UTC m=+0.065320333 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:13:36 compute-0 podman[141185]: 2025-10-08 16:13:36.496182592 +0000 UTC m=+0.098090687 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 08 16:13:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:41.879 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:13:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:41.879 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:13:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:41.880 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:13:48 compute-0 podman[141235]: 2025-10-08 16:13:48.445482978 +0000 UTC m=+0.056198190 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:51 compute-0 podman[141256]: 2025-10-08 16:13:51.473185135 +0000 UTC m=+0.072662531 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.875 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:13:51 compute-0 nova_compute[117413]: 2025-10-08 16:13:51.876 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:13:52 compute-0 nova_compute[117413]: 2025-10-08 16:13:52.040 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:13:52 compute-0 nova_compute[117413]: 2025-10-08 16:13:52.041 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:13:52 compute-0 nova_compute[117413]: 2025-10-08 16:13:52.057 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:13:52 compute-0 nova_compute[117413]: 2025-10-08 16:13:52.057 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6404MB free_disk=73.30744171142578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:13:52 compute-0 nova_compute[117413]: 2025-10-08 16:13:52.057 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:13:52 compute-0 nova_compute[117413]: 2025-10-08 16:13:52.058 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:13:53 compute-0 nova_compute[117413]: 2025-10-08 16:13:53.305 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:13:53 compute-0 nova_compute[117413]: 2025-10-08 16:13:53.306 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:13:52 up 22 min,  0 user,  load average: 0.00, 0.19, 0.31\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:13:53 compute-0 nova_compute[117413]: 2025-10-08 16:13:53.327 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:13:53 compute-0 nova_compute[117413]: 2025-10-08 16:13:53.872 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.411 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.412 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.354s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.412 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.412 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.937 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.938 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:54 compute-0 nova_compute[117413]: 2025-10-08 16:13:54.938 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:13:56 compute-0 nova_compute[117413]: 2025-10-08 16:13:56.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:56 compute-0 nova_compute[117413]: 2025-10-08 16:13:56.903 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:56 compute-0 nova_compute[117413]: 2025-10-08 16:13:56.903 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:56 compute-0 nova_compute[117413]: 2025-10-08 16:13:56.903 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:56 compute-0 nova_compute[117413]: 2025-10-08 16:13:56.904 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:13:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:57.457 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:f4:4b 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e161b3cceb84b36879e8add64d97803', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9abdd97c-a47b-441d-91b7-836dd83a3aee, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a50b613f-e24b-4ce5-abb2-a0a732213099) old=Port_Binding(mac=['fa:16:3e:76:f4:4b'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e161b3cceb84b36879e8add64d97803', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:13:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:57.459 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a50b613f-e24b-4ce5-abb2-a0a732213099 in datapath 57288f32-6428-49fa-ad0a-b8df4f9be3d5 updated
Oct 08 16:13:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:57.459 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57288f32-6428-49fa-ad0a-b8df4f9be3d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:13:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:13:57.461 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4e15e5f5-2e8e-411b-bc4d-0752935b08d4]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:13:58 compute-0 podman[141278]: 2025-10-08 16:13:58.452819588 +0000 UTC m=+0.057198088 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:13:59 compute-0 podman[127881]: time="2025-10-08T16:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:13:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:13:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3003 "" "Go-http-client/1.1"
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: ERROR   16:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: ERROR   16:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: ERROR   16:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:14:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:14:02 compute-0 podman[141299]: 2025-10-08 16:14:02.452991056 +0000 UTC m=+0.061381029 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:14:05 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:05.830 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:14:05 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:05.832 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:14:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:06.708 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a2:a5:13 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-89fd0259-b1c1-4068-b322-4b308b602c18', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89fd0259-b1c1-4068-b322-4b308b602c18', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acf253db72ef46d79883318c90b63116', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=febc715f-facb-42c9-9584-3fbe0dd43c04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a4bb3b71-4c62-4aa5-b745-51f48266db4f) old=Port_Binding(mac=['fa:16:3e:a2:a5:13'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-89fd0259-b1c1-4068-b322-4b308b602c18', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89fd0259-b1c1-4068-b322-4b308b602c18', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acf253db72ef46d79883318c90b63116', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:14:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:06.709 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a4bb3b71-4c62-4aa5-b745-51f48266db4f in datapath 89fd0259-b1c1-4068-b322-4b308b602c18 updated
Oct 08 16:14:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:06.711 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 89fd0259-b1c1-4068-b322-4b308b602c18, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:14:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:06.712 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[72ede43f-0712-4de4-a568-56f634195df5]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:07 compute-0 podman[141319]: 2025-10-08 16:14:07.453104639 +0000 UTC m=+0.059923997 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:14:07 compute-0 podman[141320]: 2025-10-08 16:14:07.500066423 +0000 UTC m=+0.101901728 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 08 16:14:10 compute-0 nova_compute[117413]: 2025-10-08 16:14:10.633 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:13 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:13.841 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:19 compute-0 podman[141368]: 2025-10-08 16:14:19.453849316 +0000 UTC m=+0.061133171 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:14:22 compute-0 podman[141390]: 2025-10-08 16:14:22.473099062 +0000 UTC m=+0.075145311 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Oct 08 16:14:28 compute-0 unix_chkpwd[141413]: password check failed for user (root)
Oct 08 16:14:28 compute-0 sshd-session[141411]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:14:29 compute-0 podman[141414]: 2025-10-08 16:14:29.442655407 +0000 UTC m=+0.050693591 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 08 16:14:29 compute-0 podman[127881]: time="2025-10-08T16:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:14:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:14:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 08 16:14:30 compute-0 sshd-session[141411]: Failed password for root from 80.94.93.176 port 35418 ssh2
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: ERROR   16:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: ERROR   16:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: ERROR   16:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:14:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:14:32 compute-0 unix_chkpwd[141436]: password check failed for user (root)
Oct 08 16:14:33 compute-0 podman[141437]: 2025-10-08 16:14:33.459227393 +0000 UTC m=+0.064869388 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:14:34 compute-0 sshd-session[141411]: Failed password for root from 80.94.93.176 port 35418 ssh2
Oct 08 16:14:36 compute-0 unix_chkpwd[141456]: password check failed for user (root)
Oct 08 16:14:37 compute-0 sshd-session[141411]: Failed password for root from 80.94.93.176 port 35418 ssh2
Oct 08 16:14:37 compute-0 nova_compute[117413]: 2025-10-08 16:14:37.950 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:37 compute-0 nova_compute[117413]: 2025-10-08 16:14:37.951 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:38 compute-0 sshd-session[141411]: Received disconnect from 80.94.93.176 port 35418:11:  [preauth]
Oct 08 16:14:38 compute-0 sshd-session[141411]: Disconnected from authenticating user root 80.94.93.176 port 35418 [preauth]
Oct 08 16:14:38 compute-0 sshd-session[141411]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:14:38 compute-0 podman[141459]: 2025-10-08 16:14:38.439820468 +0000 UTC m=+0.053504253 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:14:38 compute-0 nova_compute[117413]: 2025-10-08 16:14:38.459 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:14:38 compute-0 podman[141460]: 2025-10-08 16:14:38.505938751 +0000 UTC m=+0.115264541 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 08 16:14:38 compute-0 unix_chkpwd[141507]: password check failed for user (root)
Oct 08 16:14:38 compute-0 sshd-session[141457]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:14:39 compute-0 nova_compute[117413]: 2025-10-08 16:14:39.232 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:39 compute-0 nova_compute[117413]: 2025-10-08 16:14:39.233 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:39 compute-0 nova_compute[117413]: 2025-10-08 16:14:39.238 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:14:39 compute-0 nova_compute[117413]: 2025-10-08 16:14:39.238 2 INFO nova.compute.claims [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:14:41 compute-0 nova_compute[117413]: 2025-10-08 16:14:41.004 2 DEBUG nova.compute.provider_tree [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:14:41 compute-0 sshd-session[141457]: Failed password for root from 80.94.93.176 port 18874 ssh2
Oct 08 16:14:41 compute-0 nova_compute[117413]: 2025-10-08 16:14:41.513 2 DEBUG nova.scheduler.client.report [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:14:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:41.880 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:41.881 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:41.881 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:42 compute-0 nova_compute[117413]: 2025-10-08 16:14:42.021 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.789s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:42 compute-0 nova_compute[117413]: 2025-10-08 16:14:42.022 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:14:42 compute-0 nova_compute[117413]: 2025-10-08 16:14:42.535 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:14:42 compute-0 nova_compute[117413]: 2025-10-08 16:14:42.536 2 DEBUG nova.network.neutron [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:14:42 compute-0 nova_compute[117413]: 2025-10-08 16:14:42.538 2 WARNING neutronclient.v2_0.client [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:14:42 compute-0 nova_compute[117413]: 2025-10-08 16:14:42.540 2 WARNING neutronclient.v2_0.client [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:14:42 compute-0 unix_chkpwd[141509]: password check failed for user (root)
Oct 08 16:14:43 compute-0 nova_compute[117413]: 2025-10-08 16:14:43.050 2 INFO nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:14:43 compute-0 nova_compute[117413]: 2025-10-08 16:14:43.562 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:14:44 compute-0 sshd-session[141457]: Failed password for root from 80.94.93.176 port 18874 ssh2
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.584 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.586 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.587 2 INFO nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Creating image(s)
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.588 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "/var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.588 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "/var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.589 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "/var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.589 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:44 compute-0 nova_compute[117413]: 2025-10-08 16:14:44.590 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:45 compute-0 unix_chkpwd[141510]: password check failed for user (root)
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.619 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'QFI\xfb') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.624 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.624 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.part --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.714 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.part --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.715 2 DEBUG nova.virt.images [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] 44390e9d-4b05-4916-9ba9-97b19c79ef43 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.12/site-packages/nova/virt/images.py:278
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.716 2 DEBUG nova.privsep.utils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.12/site-packages/nova/privsep/utils.py:63
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.717 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.part /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.converted execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.894 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.part /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.converted" returned: 0 in 0.178s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.901 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.converted --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.977 2 DEBUG nova.network.neutron [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Successfully created port: f84fa974-5937-4d9b-8926-ae9b528e5aa9 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.984 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.985 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.395s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.986 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.992 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:14:45 compute-0 nova_compute[117413]: 2025-10-08 16:14:45.994 2 INFO oslo.privsep.daemon [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpayyn2mm2/privsep.sock']
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.804 2 INFO oslo.privsep.daemon [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Spawned new privsep daemon via rootwrap
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.620 66 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.627 66 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.630 66 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.631 66 INFO oslo.privsep.daemon [-] privsep daemon running as pid 66
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.890 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.947 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.949 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.950 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.950 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.954 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:14:46 compute-0 nova_compute[117413]: 2025-10-08 16:14:46.955 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.008 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.010 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:47 compute-0 sshd-session[141457]: Failed password for root from 80.94.93.176 port 18874 ssh2
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.047 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.049 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.099s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.049 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.105 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.106 2 DEBUG nova.virt.disk.api [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Checking if we can resize image /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.107 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.165 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.166 2 DEBUG nova.virt.disk.api [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Cannot resize image /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.168 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.168 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Ensure instance console log exists: /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.169 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.170 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.170 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.274 2 DEBUG nova.network.neutron [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Successfully updated port: f84fa974-5937-4d9b-8926-ae9b528e5aa9 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:14:47 compute-0 sshd-session[141457]: Received disconnect from 80.94.93.176 port 18874:11:  [preauth]
Oct 08 16:14:47 compute-0 sshd-session[141457]: Disconnected from authenticating user root 80.94.93.176 port 18874 [preauth]
Oct 08 16:14:47 compute-0 sshd-session[141457]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.329 2 DEBUG nova.compute.manager [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-changed-f84fa974-5937-4d9b-8926-ae9b528e5aa9 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.329 2 DEBUG nova.compute.manager [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Refreshing instance network info cache due to event network-changed-f84fa974-5937-4d9b-8926-ae9b528e5aa9. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.330 2 DEBUG oslo_concurrency.lockutils [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-f1a00ac6-63aa-402b-b689-ace4f0b21ae0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.330 2 DEBUG oslo_concurrency.lockutils [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-f1a00ac6-63aa-402b-b689-ace4f0b21ae0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.330 2 DEBUG nova.network.neutron [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Refreshing network info cache for port f84fa974-5937-4d9b-8926-ae9b528e5aa9 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.787 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "refresh_cache-f1a00ac6-63aa-402b-b689-ace4f0b21ae0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:14:47 compute-0 nova_compute[117413]: 2025-10-08 16:14:47.846 2 WARNING neutronclient.v2_0.client [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:14:48 compute-0 unix_chkpwd[141549]: password check failed for user (root)
Oct 08 16:14:48 compute-0 sshd-session[141547]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:14:48 compute-0 nova_compute[117413]: 2025-10-08 16:14:48.757 2 DEBUG nova.network.neutron [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:14:48 compute-0 nova_compute[117413]: 2025-10-08 16:14:48.908 2 DEBUG nova.network.neutron [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:14:49 compute-0 nova_compute[117413]: 2025-10-08 16:14:49.414 2 DEBUG oslo_concurrency.lockutils [req-7c5ba8e7-a6b9-4d74-af8b-f5849cd05a27 req-8505f899-6813-4b52-8188-bda4670319cd c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-f1a00ac6-63aa-402b-b689-ace4f0b21ae0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:14:49 compute-0 nova_compute[117413]: 2025-10-08 16:14:49.415 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquired lock "refresh_cache-f1a00ac6-63aa-402b-b689-ace4f0b21ae0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:14:49 compute-0 nova_compute[117413]: 2025-10-08 16:14:49.415 2 DEBUG nova.network.neutron [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:14:49 compute-0 sshd-session[141547]: Failed password for root from 80.94.93.176 port 25618 ssh2
Oct 08 16:14:50 compute-0 unix_chkpwd[141550]: password check failed for user (root)
Oct 08 16:14:50 compute-0 podman[141551]: 2025-10-08 16:14:50.475569379 +0000 UTC m=+0.079743205 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 08 16:14:50 compute-0 nova_compute[117413]: 2025-10-08 16:14:50.768 2 DEBUG nova.network.neutron [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.361 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.719 2 WARNING neutronclient.v2_0.client [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.851 2 DEBUG nova.network.neutron [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Updating instance_info_cache with network_info: [{"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:51 compute-0 nova_compute[117413]: 2025-10-08 16:14:51.874 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.020 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.021 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.037 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.038 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6320MB free_disk=73.27278900146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.038 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.038 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.357 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Releasing lock "refresh_cache-f1a00ac6-63aa-402b-b689-ace4f0b21ae0" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.358 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Instance network_info: |[{"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.362 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Start _get_guest_xml network_info=[{"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.368 2 WARNING nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.370 2 DEBUG nova.virt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestDataModel-server-1960118664', uuid='f1a00ac6-63aa-402b-b689-ace4f0b21ae0'), owner=OwnerMeta(userid='d8bc1ce88c7f41a6b2239ab0d22c0a88', username='tempest-TestDataModel-967797023-project-admin', projectid='acf253db72ef46d79883318c90b63116', projectname='tempest-TestDataModel-967797023'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940092.3706205) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.375 2 DEBUG nova.virt.libvirt.host [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.376 2 DEBUG nova.virt.libvirt.host [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.380 2 DEBUG nova.virt.libvirt.host [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.381 2 DEBUG nova.virt.libvirt.host [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.382 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.382 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.383 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.383 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.384 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.384 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.384 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.385 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.385 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.385 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.385 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.386 2 DEBUG nova.virt.hardware [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.389 2 DEBUG nova.privsep.utils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.12/site-packages/nova/privsep/utils.py:63
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.390 2 DEBUG nova.virt.libvirt.vif [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-1960118664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-1960118664',id=3,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acf253db72ef46d79883318c90b63116',ramdisk_id='',reservation_id='r-rczcegaq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestDataModel-967797023',owner_user_name='tempest-TestDataModel-967797023-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:14:43Z,user_data=None,user_id='d8bc1ce88c7f41a6b2239ab0d22c0a88',uuid=f1a00ac6-63aa-402b-b689-ace4f0b21ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.391 2 DEBUG nova.network.os_vif_util [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Converting VIF {"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.392 2 DEBUG nova.network.os_vif_util [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.393 2 DEBUG nova.objects.instance [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lazy-loading 'pci_devices' on Instance uuid f1a00ac6-63aa-402b-b689-ace4f0b21ae0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:14:52 compute-0 sshd-session[141547]: Failed password for root from 80.94.93.176 port 25618 ssh2
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.901 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <uuid>f1a00ac6-63aa-402b-b689-ace4f0b21ae0</uuid>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <name>instance-00000003</name>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:name>tempest-TestDataModel-server-1960118664</nova:name>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:14:52</nova:creationTime>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:14:52 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:14:52 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:user uuid="d8bc1ce88c7f41a6b2239ab0d22c0a88">tempest-TestDataModel-967797023-project-admin</nova:user>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:project uuid="acf253db72ef46d79883318c90b63116">tempest-TestDataModel-967797023</nova:project>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         <nova:port uuid="f84fa974-5937-4d9b-8926-ae9b528e5aa9">
Oct 08 16:14:52 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <system>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <entry name="serial">f1a00ac6-63aa-402b-b689-ace4f0b21ae0</entry>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <entry name="uuid">f1a00ac6-63aa-402b-b689-ace4f0b21ae0</entry>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </system>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <os>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </os>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <features>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </features>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.config"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:63:81:67"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <target dev="tapf84fa974-59"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/console.log" append="off"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <video>
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </video>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:14:52 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:14:52 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:14:52 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:14:52 compute-0 nova_compute[117413]: </domain>
Oct 08 16:14:52 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.902 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Preparing to wait for external event network-vif-plugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.902 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.903 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.903 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.904 2 DEBUG nova.virt.libvirt.vif [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-1960118664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-1960118664',id=3,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acf253db72ef46d79883318c90b63116',ramdisk_id='',reservation_id='r-rczcegaq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestDataModel-967797023',owner_user_name='tempest-TestDataModel-967797023-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:14:43Z,user_data=None,user_id='d8bc1ce88c7f41a6b2239ab0d22c0a88',uuid=f1a00ac6-63aa-402b-b689-ace4f0b21ae0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.904 2 DEBUG nova.network.os_vif_util [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Converting VIF {"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.904 2 DEBUG nova.network.os_vif_util [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.905 2 DEBUG os_vif [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.984 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.985 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.985 2 DEBUG ovsdbapp.backend.ovs_idl [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.12/site-packages/ovs/reconnect.py:519
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.997 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:52 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '0efd5ab7-bc9c-57ca-8260-655526cdbd1d', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:52.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.002 2 INFO oslo.privsep.daemon [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp1ssw_wx5/privsep.sock']
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.157 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance f1a00ac6-63aa-402b-b689-ace4f0b21ae0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.158 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.158 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:14:52 up 23 min,  0 user,  load average: 0.07, 0.17, 0.29\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_spawning': '1', 'num_os_type_None': '1', 'num_proj_acf253db72ef46d79883318c90b63116': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.197 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.233 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.234 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.245 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.260 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.291 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:14:53 compute-0 podman[141576]: 2025-10-08 16:14:53.454894901 +0000 UTC m=+0.061403749 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.843 2 INFO oslo.privsep.daemon [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Spawned new privsep daemon via rootwrap
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.643 88 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.648 88 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.650 88 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.650 88 INFO oslo.privsep.daemon [-] privsep daemon running as pid 88
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.870 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updated inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:975
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.870 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:14:53 compute-0 nova_compute[117413]: 2025-10-08 16:14:53.871 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:14:53 compute-0 unix_chkpwd[141601]: password check failed for user (root)
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.112 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf84fa974-59, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.112 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapf84fa974-59, col_values=(('qos', UUID('bed83f42-71f1-432b-93e3-c7fbe65a4b75')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapf84fa974-59, col_values=(('external_ids', {'iface-id': 'f84fa974-5937-4d9b-8926-ae9b528e5aa9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:81:67', 'vm-uuid': 'f1a00ac6-63aa-402b-b689-ace4f0b21ae0'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:54 compute-0 NetworkManager[1034]: <info>  [1759940094.1159] manager: (tapf84fa974-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.124 2 INFO os_vif [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59')
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.385 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:14:54 compute-0 nova_compute[117413]: 2025-10-08 16:14:54.385 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.347s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.382 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.383 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.383 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.384 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.384 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.384 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.748 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.748 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.749 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] No VIF found with MAC fa:16:3e:63:81:67, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:14:55 compute-0 nova_compute[117413]: 2025-10-08 16:14:55.749 2 INFO nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Using config drive
Oct 08 16:14:55 compute-0 sshd-session[141547]: Failed password for root from 80.94.93.176 port 25618 ssh2
Oct 08 16:14:55 compute-0 sshd-session[141547]: Received disconnect from 80.94.93.176 port 25618:11:  [preauth]
Oct 08 16:14:55 compute-0 sshd-session[141547]: Disconnected from authenticating user root 80.94.93.176 port 25618 [preauth]
Oct 08 16:14:55 compute-0 sshd-session[141547]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.280 2 WARNING neutronclient.v2_0.client [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.558 2 INFO nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Creating config drive at /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.config
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.563 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpkggnktph execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.699 2 DEBUG oslo_concurrency.processutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpkggnktph" returned: 0 in 0.136s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:14:56 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 08 16:14:56 compute-0 kernel: tapf84fa974-59: entered promiscuous mode
Oct 08 16:14:56 compute-0 NetworkManager[1034]: <info>  [1759940096.7672] manager: (tapf84fa974-59): new Tun device (/org/freedesktop/NetworkManager/Devices/23)
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:56 compute-0 ovn_controller[19768]: 2025-10-08T16:14:56Z|00040|binding|INFO|Claiming lport f84fa974-5937-4d9b-8926-ae9b528e5aa9 for this chassis.
Oct 08 16:14:56 compute-0 ovn_controller[19768]: 2025-10-08T16:14:56Z|00041|binding|INFO|f84fa974-5937-4d9b-8926-ae9b528e5aa9: Claiming fa:16:3e:63:81:67 10.100.0.6
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.782 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:81:67 10.100.0.6'], port_security=['fa:16:3e:63:81:67 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f1a00ac6-63aa-402b-b689-ace4f0b21ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acf253db72ef46d79883318c90b63116', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd5125f3d-1992-4f39-86c4-a5a8059d457e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9abdd97c-a47b-441d-91b7-836dd83a3aee, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=f84fa974-5937-4d9b-8926-ae9b528e5aa9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.783 28633 INFO neutron.agent.ovn.metadata.agent [-] Port f84fa974-5937-4d9b-8926-ae9b528e5aa9 in datapath 57288f32-6428-49fa-ad0a-b8df4f9be3d5 bound to our chassis
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.784 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 57288f32-6428-49fa-ad0a-b8df4f9be3d5
Oct 08 16:14:56 compute-0 systemd-udevd[141625]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.804 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0441b9-6aed-4207-aaa6-09f93334f261]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.809 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap57288f32-61 in ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.813 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap57288f32-60 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.815 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b07b3d13-95a2-4b43-a8e3-db32285196bb]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:56 compute-0 NetworkManager[1034]: <info>  [1759940096.8166] device (tapf84fa974-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:14:56 compute-0 NetworkManager[1034]: <info>  [1759940096.8172] device (tapf84fa974-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.819 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff853e9-416d-4744-9b14-8f67e157817f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.834 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[80555ec5-8097-44e9-8aa5-9c97d5dc2b1b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:56 compute-0 systemd-machined[77548]: New machine qemu-1-instance-00000003.
Oct 08 16:14:56 compute-0 ovn_controller[19768]: 2025-10-08T16:14:56Z|00042|binding|INFO|Setting lport f84fa974-5937-4d9b-8926-ae9b528e5aa9 ovn-installed in OVS
Oct 08 16:14:56 compute-0 ovn_controller[19768]: 2025-10-08T16:14:56Z|00043|binding|INFO|Setting lport f84fa974-5937-4d9b-8926-ae9b528e5aa9 up in Southbound
Oct 08 16:14:56 compute-0 nova_compute[117413]: 2025-10-08 16:14:56.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.854 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8de27a-ccac-437a-876d-c918fc51539d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:56.856 28633 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmptfsxjjq2/privsep.sock']
Oct 08 16:14:56 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000003.
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.050 2 DEBUG nova.compute.manager [req-d92764c7-c2ad-4b3b-bf6b-006b3ca98b2c req-ec541508-e3b9-46fa-839c-9ec93e718d3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-plugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.051 2 DEBUG oslo_concurrency.lockutils [req-d92764c7-c2ad-4b3b-bf6b-006b3ca98b2c req-ec541508-e3b9-46fa-839c-9ec93e718d3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.055 2 DEBUG oslo_concurrency.lockutils [req-d92764c7-c2ad-4b3b-bf6b-006b3ca98b2c req-ec541508-e3b9-46fa-839c-9ec93e718d3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.056 2 DEBUG oslo_concurrency.lockutils [req-d92764c7-c2ad-4b3b-bf6b-006b3ca98b2c req-ec541508-e3b9-46fa-839c-9ec93e718d3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.056 2 DEBUG nova.compute.manager [req-d92764c7-c2ad-4b3b-bf6b-006b3ca98b2c req-ec541508-e3b9-46fa-839c-9ec93e718d3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Processing event network-vif-plugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.595 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.600 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.600 28633 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.601 28633 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptfsxjjq2/privsep.sock __init__ /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:377
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.421 141656 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.426 141656 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.427 141656 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.427 141656 INFO oslo.privsep.daemon [-] privsep daemon running as pid 141656
Oct 08 16:14:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:57.602 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[df956c2c-b63d-4e00-aae3-c7d3ac120760]: (2,) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.604 2 INFO nova.virt.libvirt.driver [-] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Instance spawned successfully.
Oct 08 16:14:57 compute-0 nova_compute[117413]: 2025-10-08 16:14:57.605 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.119 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.120 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.120 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.121 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.121 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.121 2 DEBUG nova.virt.libvirt.driver [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.128 141656 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.128 141656 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.129 141656 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.633 2 INFO nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Took 14.05 seconds to spawn the instance on the hypervisor.
Oct 08 16:14:58 compute-0 nova_compute[117413]: 2025-10-08 16:14:58.634 2 DEBUG nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.674 141656 INFO oslo_service.backend [-] Loading backend: eventlet
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.679 141656 INFO oslo_service.backend [-] Backend 'eventlet' successfully loaded and cached.
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.764 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[d3781e96-4f55-46f3-b44a-43cef404c2ea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 NetworkManager[1034]: <info>  [1759940098.7701] manager: (tap57288f32-60): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.769 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[526fab3a-0d80-4787-b271-13c64b07409b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.808 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[741a98ee-e579-488d-87ea-c4819726b31d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.812 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[f30fd588-e854-4e9a-a26b-809f8c8b9a1b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 NetworkManager[1034]: <info>  [1759940098.8381] device (tap57288f32-60): carrier: link connected
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.845 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9c878d-f426-42ef-9821-9ae4577aabaa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.862 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0e14e769-6af3-4f78-b419-5c334f596278]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57288f32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f4:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 138752, 'reachable_time': 21971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 141678, 'error': None, 'target': 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.881 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[58448460-aec0-4525-89d3-b8903f2a9ccb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:f44b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 138752, 'tstamp': 138752}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 141679, 'error': None, 'target': 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.900 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[86f58dde-4031-4caf-9b6a-d611018b457a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap57288f32-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:f4:4b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 138752, 'reachable_time': 21971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 141680, 'error': None, 'target': 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:58 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:58.940 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2e84dd19-2211-402d-8e24-c59cb9a0ded7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.018 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4545e7c7-e58a-464e-b384-43fc6d6ca593]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.019 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57288f32-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.020 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.020 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap57288f32-60, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:59 compute-0 NetworkManager[1034]: <info>  [1759940099.0233] manager: (tap57288f32-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:59 compute-0 kernel: tap57288f32-60: entered promiscuous mode
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.026 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap57288f32-60, col_values=(('external_ids', {'iface-id': 'a50b613f-e24b-4ce5-abb2-a0a732213099'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:14:59 compute-0 ovn_controller[19768]: 2025-10-08T16:14:59Z|00044|binding|INFO|Releasing lport a50b613f-e24b-4ce5-abb2-a0a732213099 from this chassis (sb_readonly=0)
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.042 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cf1cfc-bf7d-4caa-96b9-9804edd2e9f8]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.042 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.043 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.043 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 57288f32-6428-49fa-ad0a-b8df4f9be3d5 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.043 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.043 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f5594677-8ba3-4abb-88ca-cd34d5862d9d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.044 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.044 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4a88f6ef-27ee-4b2f-9b36-558211e83ce4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.045 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-57288f32-6428-49fa-ad0a-b8df4f9be3d5
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 57288f32-6428-49fa-ad0a-b8df4f9be3d5
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:14:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:14:59.045 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'env', 'PROCESS_TAG=haproxy-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/57288f32-6428-49fa-ad0a-b8df4f9be3d5.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.105 2 DEBUG nova.compute.manager [req-d8661a19-5534-4738-a0f9-2d8d1d4e8748 req-63badd69-0af0-4593-87f3-716410fcf610 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-plugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.106 2 DEBUG oslo_concurrency.lockutils [req-d8661a19-5534-4738-a0f9-2d8d1d4e8748 req-63badd69-0af0-4593-87f3-716410fcf610 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.106 2 DEBUG oslo_concurrency.lockutils [req-d8661a19-5534-4738-a0f9-2d8d1d4e8748 req-63badd69-0af0-4593-87f3-716410fcf610 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.106 2 DEBUG oslo_concurrency.lockutils [req-d8661a19-5534-4738-a0f9-2d8d1d4e8748 req-63badd69-0af0-4593-87f3-716410fcf610 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.107 2 DEBUG nova.compute.manager [req-d8661a19-5534-4738-a0f9-2d8d1d4e8748 req-63badd69-0af0-4593-87f3-716410fcf610 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] No waiting events found dispatching network-vif-plugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.107 2 WARNING nova.compute.manager [req-d8661a19-5534-4738-a0f9-2d8d1d4e8748 req-63badd69-0af0-4593-87f3-716410fcf610 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received unexpected event network-vif-plugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 for instance with vm_state active and task_state None.
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.175 2 INFO nova.compute.manager [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Took 20.14 seconds to build instance.
Oct 08 16:14:59 compute-0 podman[141711]: 2025-10-08 16:14:59.458836581 +0000 UTC m=+0.052766571 container create b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:14:59 compute-0 systemd[1]: Started libpod-conmon-b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7.scope.
Oct 08 16:14:59 compute-0 podman[141711]: 2025-10-08 16:14:59.432978331 +0000 UTC m=+0.026908341 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:14:59 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:14:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e5eb6ee88f949dbb779fa190d536c25f390c3994a854073913a316141547f68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:14:59 compute-0 podman[141711]: 2025-10-08 16:14:59.552719159 +0000 UTC m=+0.146649169 container init b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:14:59 compute-0 podman[141711]: 2025-10-08 16:14:59.560117001 +0000 UTC m=+0.154046991 container start b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 08 16:14:59 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [NOTICE]   (141749) : New worker (141751) forked
Oct 08 16:14:59 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [NOTICE]   (141749) : Loading success.
Oct 08 16:14:59 compute-0 podman[141724]: 2025-10-08 16:14:59.596521033 +0000 UTC m=+0.093380244 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 08 16:14:59 compute-0 nova_compute[117413]: 2025-10-08 16:14:59.681 2 DEBUG oslo_concurrency.lockutils [None req-c763281b-24f2-408b-931b-dfabdfbc2483 d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.730s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:14:59 compute-0 podman[127881]: time="2025-10-08T16:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:14:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:14:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3485 "" "Go-http-client/1.1"
Oct 08 16:15:00 compute-0 nova_compute[117413]: 2025-10-08 16:15:00.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: ERROR   16:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: ERROR   16:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: ERROR   16:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:15:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:15:04 compute-0 nova_compute[117413]: 2025-10-08 16:15:04.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:04 compute-0 podman[141760]: 2025-10-08 16:15:04.44829614 +0000 UTC m=+0.054038088 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 08 16:15:05 compute-0 nova_compute[117413]: 2025-10-08 16:15:05.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.318 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.318 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.318 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.319 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.319 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.331 2 INFO nova.compute.manager [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Terminating instance
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.847 2 DEBUG nova.compute.manager [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:15:07 compute-0 kernel: tapf84fa974-59 (unregistering): left promiscuous mode
Oct 08 16:15:07 compute-0 NetworkManager[1034]: <info>  [1759940107.8727] device (tapf84fa974-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:15:07 compute-0 ovn_controller[19768]: 2025-10-08T16:15:07Z|00045|binding|INFO|Releasing lport f84fa974-5937-4d9b-8926-ae9b528e5aa9 from this chassis (sb_readonly=0)
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:07 compute-0 ovn_controller[19768]: 2025-10-08T16:15:07Z|00046|binding|INFO|Setting lport f84fa974-5937-4d9b-8926-ae9b528e5aa9 down in Southbound
Oct 08 16:15:07 compute-0 ovn_controller[19768]: 2025-10-08T16:15:07Z|00047|binding|INFO|Removing iface tapf84fa974-59 ovn-installed in OVS
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:07.892 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:81:67 10.100.0.6'], port_security=['fa:16:3e:63:81:67 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f1a00ac6-63aa-402b-b689-ace4f0b21ae0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acf253db72ef46d79883318c90b63116', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd5125f3d-1992-4f39-86c4-a5a8059d457e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9abdd97c-a47b-441d-91b7-836dd83a3aee, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=f84fa974-5937-4d9b-8926-ae9b528e5aa9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:15:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:07.893 28633 INFO neutron.agent.ovn.metadata.agent [-] Port f84fa974-5937-4d9b-8926-ae9b528e5aa9 in datapath 57288f32-6428-49fa-ad0a-b8df4f9be3d5 unbound from our chassis
Oct 08 16:15:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:07.894 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 57288f32-6428-49fa-ad0a-b8df4f9be3d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:15:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:07.895 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5c97e7b8-c8a8-4c0a-bd5e-0ea91fa4afd0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:07.896 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5 namespace which is not needed anymore
Oct 08 16:15:07 compute-0 nova_compute[117413]: 2025-10-08 16:15:07.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:07 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 08 16:15:07 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Consumed 10.802s CPU time.
Oct 08 16:15:07 compute-0 systemd-machined[77548]: Machine qemu-1-instance-00000003 terminated.
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.030 2 DEBUG nova.compute.manager [req-0fdcfc3a-18aa-4d2d-8bb7-1825c19d8fb7 req-0cb5559c-2d45-4a41-8e9b-0a6e14d9df9e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-unplugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.030 2 DEBUG oslo_concurrency.lockutils [req-0fdcfc3a-18aa-4d2d-8bb7-1825c19d8fb7 req-0cb5559c-2d45-4a41-8e9b-0a6e14d9df9e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.031 2 DEBUG oslo_concurrency.lockutils [req-0fdcfc3a-18aa-4d2d-8bb7-1825c19d8fb7 req-0cb5559c-2d45-4a41-8e9b-0a6e14d9df9e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.031 2 DEBUG oslo_concurrency.lockutils [req-0fdcfc3a-18aa-4d2d-8bb7-1825c19d8fb7 req-0cb5559c-2d45-4a41-8e9b-0a6e14d9df9e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.031 2 DEBUG nova.compute.manager [req-0fdcfc3a-18aa-4d2d-8bb7-1825c19d8fb7 req-0cb5559c-2d45-4a41-8e9b-0a6e14d9df9e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] No waiting events found dispatching network-vif-unplugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.031 2 DEBUG nova.compute.manager [req-0fdcfc3a-18aa-4d2d-8bb7-1825c19d8fb7 req-0cb5559c-2d45-4a41-8e9b-0a6e14d9df9e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-unplugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:15:08 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [NOTICE]   (141749) : haproxy version is 3.0.5-8e879a5
Oct 08 16:15:08 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [NOTICE]   (141749) : path to executable is /usr/sbin/haproxy
Oct 08 16:15:08 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [WARNING]  (141749) : Exiting Master process...
Oct 08 16:15:08 compute-0 podman[141805]: 2025-10-08 16:15:08.066904274 +0000 UTC m=+0.049133088 container kill b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 08 16:15:08 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [ALERT]    (141749) : Current worker (141751) exited with code 143 (Terminated)
Oct 08 16:15:08 compute-0 neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5[141731]: [WARNING]  (141749) : All workers exited. Exiting... (0)
Oct 08 16:15:08 compute-0 systemd[1]: libpod-b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7.scope: Deactivated successfully.
Oct 08 16:15:08 compute-0 conmon[141731]: conmon b41591638b09ed69918a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7.scope/container/memory.events
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:08 compute-0 podman[141822]: 2025-10-08 16:15:08.115481715 +0000 UTC m=+0.023503884 container died b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.126 2 INFO nova.virt.libvirt.driver [-] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Instance destroyed successfully.
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.127 2 DEBUG nova.objects.instance [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lazy-loading 'resources' on Instance uuid f1a00ac6-63aa-402b-b689-ace4f0b21ae0 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7-userdata-shm.mount: Deactivated successfully.
Oct 08 16:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5eb6ee88f949dbb779fa190d536c25f390c3994a854073913a316141547f68-merged.mount: Deactivated successfully.
Oct 08 16:15:08 compute-0 podman[141822]: 2025-10-08 16:15:08.166377852 +0000 UTC m=+0.074400011 container cleanup b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:15:08 compute-0 systemd[1]: libpod-conmon-b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7.scope: Deactivated successfully.
Oct 08 16:15:08 compute-0 podman[141827]: 2025-10-08 16:15:08.184223393 +0000 UTC m=+0.077221302 container remove b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.190 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[57c5e220-dd59-4e79-9534-ede6c6b929f8]: (4, ("Wed Oct  8 04:15:07 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5 (b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7)\nb41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7\nWed Oct  8 04:15:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5 (b41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7)\nb41591638b09ed69918afc4e3877070fdf5c44fa1cdc1fc660f8c29724d514a7\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.192 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1ebd68bf-d752-450b-ac71-8d6f58067dba]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.192 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/57288f32-6428-49fa-ad0a-b8df4f9be3d5.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.193 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8f943d-8b24-4008-9895-f895d27bdb70]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.193 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap57288f32-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:15:08 compute-0 kernel: tap57288f32-60: left promiscuous mode
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.212 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b8095070-d851-4715-836f-ce4e4626416a]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.234 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9759a4-71ca-4346-bfdf-6e76a6002203]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.235 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[daf02f57-0556-47fd-a688-7341672b6464]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.252 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d08e5b54-2424-4fda-8089-bc740bf04082]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 138744, 'reachable_time': 39088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 141876, 'error': None, 'target': 'ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.257 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-57288f32-6428-49fa-ad0a-b8df4f9be3d5 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:15:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:08.257 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3438c9-2eb5-4194-9485-69bb9eaa6a42]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d57288f32\x2d6428\x2d49fa\x2dad0a\x2db8df4f9be3d5.mount: Deactivated successfully.
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.633 2 DEBUG nova.virt.libvirt.vif [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestDataModel-server-1960118664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testdatamodel-server-1960118664',id=3,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:14:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acf253db72ef46d79883318c90b63116',ramdisk_id='',reservation_id='r-rczcegaq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_projec
t_name='tempest-TestDataModel-967797023',owner_user_name='tempest-TestDataModel-967797023-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:14:58Z,user_data=None,user_id='d8bc1ce88c7f41a6b2239ab0d22c0a88',uuid=f1a00ac6-63aa-402b-b689-ace4f0b21ae0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.634 2 DEBUG nova.network.os_vif_util [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Converting VIF {"id": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "address": "fa:16:3e:63:81:67", "network": {"id": "57288f32-6428-49fa-ad0a-b8df4f9be3d5", "bridge": "br-int", "label": "tempest-TestDataModel-776922827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e161b3cceb84b36879e8add64d97803", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf84fa974-59", "ovs_interfaceid": "f84fa974-5937-4d9b-8926-ae9b528e5aa9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.635 2 DEBUG nova.network.os_vif_util [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.635 2 DEBUG os_vif [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf84fa974-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=bed83f42-71f1-432b-93e3-c7fbe65a4b75) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.649 2 INFO os_vif [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:81:67,bridge_name='br-int',has_traffic_filtering=True,id=f84fa974-5937-4d9b-8926-ae9b528e5aa9,network=Network(57288f32-6428-49fa-ad0a-b8df4f9be3d5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf84fa974-59')
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.649 2 INFO nova.virt.libvirt.driver [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Deleting instance files /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0_del
Oct 08 16:15:08 compute-0 nova_compute[117413]: 2025-10-08 16:15:08.650 2 INFO nova.virt.libvirt.driver [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Deletion of /var/lib/nova/instances/f1a00ac6-63aa-402b-b689-ace4f0b21ae0_del complete
Oct 08 16:15:09 compute-0 nova_compute[117413]: 2025-10-08 16:15:09.164 2 INFO nova.compute.manager [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 08 16:15:09 compute-0 nova_compute[117413]: 2025-10-08 16:15:09.164 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:15:09 compute-0 nova_compute[117413]: 2025-10-08 16:15:09.165 2 DEBUG nova.compute.manager [-] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:15:09 compute-0 nova_compute[117413]: 2025-10-08 16:15:09.165 2 DEBUG nova.network.neutron [-] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:15:09 compute-0 nova_compute[117413]: 2025-10-08 16:15:09.166 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:15:09 compute-0 podman[141879]: 2025-10-08 16:15:09.483707874 +0000 UTC m=+0.084646394 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:15:09 compute-0 podman[141880]: 2025-10-08 16:15:09.512113657 +0000 UTC m=+0.109671881 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0)
Oct 08 16:15:09 compute-0 nova_compute[117413]: 2025-10-08 16:15:09.755 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.490 2 DEBUG nova.compute.manager [req-8b675449-f8be-43f7-a598-2d9cccbe00cb req-3b3af614-d333-476a-b7de-e1133955cb36 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-unplugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.491 2 DEBUG oslo_concurrency.lockutils [req-8b675449-f8be-43f7-a598-2d9cccbe00cb req-3b3af614-d333-476a-b7de-e1133955cb36 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.491 2 DEBUG oslo_concurrency.lockutils [req-8b675449-f8be-43f7-a598-2d9cccbe00cb req-3b3af614-d333-476a-b7de-e1133955cb36 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.491 2 DEBUG oslo_concurrency.lockutils [req-8b675449-f8be-43f7-a598-2d9cccbe00cb req-3b3af614-d333-476a-b7de-e1133955cb36 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.492 2 DEBUG nova.compute.manager [req-8b675449-f8be-43f7-a598-2d9cccbe00cb req-3b3af614-d333-476a-b7de-e1133955cb36 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] No waiting events found dispatching network-vif-unplugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.492 2 DEBUG nova.compute.manager [req-8b675449-f8be-43f7-a598-2d9cccbe00cb req-3b3af614-d333-476a-b7de-e1133955cb36 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-unplugged-f84fa974-5937-4d9b-8926-ae9b528e5aa9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:10 compute-0 nova_compute[117413]: 2025-10-08 16:15:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:10.834 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:15:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:10.835 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:15:11 compute-0 nova_compute[117413]: 2025-10-08 16:15:11.066 2 DEBUG nova.compute.manager [req-75b75f30-ef6b-4698-8b54-0d9f8b5d1c3d req-25013835-ed7d-4f1e-bd84-ecdbf50c32a8 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Received event network-vif-deleted-f84fa974-5937-4d9b-8926-ae9b528e5aa9 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:15:11 compute-0 nova_compute[117413]: 2025-10-08 16:15:11.067 2 INFO nova.compute.manager [req-75b75f30-ef6b-4698-8b54-0d9f8b5d1c3d req-25013835-ed7d-4f1e-bd84-ecdbf50c32a8 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Neutron deleted interface f84fa974-5937-4d9b-8926-ae9b528e5aa9; detaching it from the instance and deleting it from the info cache
Oct 08 16:15:11 compute-0 nova_compute[117413]: 2025-10-08 16:15:11.067 2 DEBUG nova.network.neutron [req-75b75f30-ef6b-4698-8b54-0d9f8b5d1c3d req-25013835-ed7d-4f1e-bd84-ecdbf50c32a8 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:15:11 compute-0 nova_compute[117413]: 2025-10-08 16:15:11.486 2 DEBUG nova.network.neutron [-] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:15:11 compute-0 nova_compute[117413]: 2025-10-08 16:15:11.703 2 DEBUG nova.compute.manager [req-75b75f30-ef6b-4698-8b54-0d9f8b5d1c3d req-25013835-ed7d-4f1e-bd84-ecdbf50c32a8 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Detach interface failed, port_id=f84fa974-5937-4d9b-8926-ae9b528e5aa9, reason: Instance f1a00ac6-63aa-402b-b689-ace4f0b21ae0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:15:12 compute-0 nova_compute[117413]: 2025-10-08 16:15:12.002 2 INFO nova.compute.manager [-] [instance: f1a00ac6-63aa-402b-b689-ace4f0b21ae0] Took 2.84 seconds to deallocate network for instance.
Oct 08 16:15:12 compute-0 nova_compute[117413]: 2025-10-08 16:15:12.682 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:12 compute-0 nova_compute[117413]: 2025-10-08 16:15:12.683 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:12 compute-0 nova_compute[117413]: 2025-10-08 16:15:12.751 2 DEBUG nova.compute.provider_tree [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:15:13 compute-0 nova_compute[117413]: 2025-10-08 16:15:13.286 2 DEBUG nova.scheduler.client.report [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:15:13 compute-0 nova_compute[117413]: 2025-10-08 16:15:13.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:13 compute-0 nova_compute[117413]: 2025-10-08 16:15:13.861 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.178s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:14 compute-0 nova_compute[117413]: 2025-10-08 16:15:14.018 2 INFO nova.scheduler.client.report [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Deleted allocations for instance f1a00ac6-63aa-402b-b689-ace4f0b21ae0
Oct 08 16:15:15 compute-0 nova_compute[117413]: 2025-10-08 16:15:15.150 2 DEBUG oslo_concurrency.lockutils [None req-bccda2a2-6028-4698-99be-83312b8f189e d8bc1ce88c7f41a6b2239ab0d22c0a88 acf253db72ef46d79883318c90b63116 - - default default] Lock "f1a00ac6-63aa-402b-b689-ace4f0b21ae0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.831s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:15 compute-0 nova_compute[117413]: 2025-10-08 16:15:15.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:18 compute-0 nova_compute[117413]: 2025-10-08 16:15:18.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:18.836 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:15:20 compute-0 nova_compute[117413]: 2025-10-08 16:15:20.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:21 compute-0 podman[141929]: 2025-10-08 16:15:21.439992909 +0000 UTC m=+0.051241178 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:15:23 compute-0 nova_compute[117413]: 2025-10-08 16:15:23.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:24 compute-0 podman[141949]: 2025-10-08 16:15:24.450984307 +0000 UTC m=+0.058663661 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Oct 08 16:15:25 compute-0 nova_compute[117413]: 2025-10-08 16:15:25.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:28 compute-0 nova_compute[117413]: 2025-10-08 16:15:28.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:29 compute-0 podman[127881]: time="2025-10-08T16:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:15:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:15:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3020 "" "Go-http-client/1.1"
Oct 08 16:15:30 compute-0 podman[141971]: 2025-10-08 16:15:30.451327565 +0000 UTC m=+0.056187049 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:15:30 compute-0 nova_compute[117413]: 2025-10-08 16:15:30.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:31 compute-0 openstack_network_exporter[130039]: ERROR   16:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:15:31 compute-0 openstack_network_exporter[130039]: ERROR   16:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:15:31 compute-0 openstack_network_exporter[130039]: ERROR   16:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:15:31 compute-0 openstack_network_exporter[130039]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:15:31 compute-0 openstack_network_exporter[130039]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:15:33 compute-0 nova_compute[117413]: 2025-10-08 16:15:33.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:35 compute-0 podman[141991]: 2025-10-08 16:15:35.450304374 +0000 UTC m=+0.054399448 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:15:35 compute-0 nova_compute[117413]: 2025-10-08 16:15:35.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:38 compute-0 nova_compute[117413]: 2025-10-08 16:15:38.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:40 compute-0 podman[142011]: 2025-10-08 16:15:40.443233995 +0000 UTC m=+0.050500516 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:15:40 compute-0 podman[142012]: 2025-10-08 16:15:40.487774959 +0000 UTC m=+0.088783221 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251007, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:15:40 compute-0 nova_compute[117413]: 2025-10-08 16:15:40.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:41.882 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:41.882 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:41.882 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:43 compute-0 nova_compute[117413]: 2025-10-08 16:15:43.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:43 compute-0 nova_compute[117413]: 2025-10-08 16:15:43.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:45 compute-0 nova_compute[117413]: 2025-10-08 16:15:45.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:48 compute-0 nova_compute[117413]: 2025-10-08 16:15:48.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:50 compute-0 nova_compute[117413]: 2025-10-08 16:15:50.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:51 compute-0 nova_compute[117413]: 2025-10-08 16:15:51.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:51 compute-0 nova_compute[117413]: 2025-10-08 16:15:51.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:15:52 compute-0 nova_compute[117413]: 2025-10-08 16:15:52.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:52 compute-0 nova_compute[117413]: 2025-10-08 16:15:52.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:52 compute-0 podman[142063]: 2025-10-08 16:15:52.494100323 +0000 UTC m=+0.093040412 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:15:52 compute-0 nova_compute[117413]: 2025-10-08 16:15:52.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:52 compute-0 nova_compute[117413]: 2025-10-08 16:15:52.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:52 compute-0 nova_compute[117413]: 2025-10-08 16:15:52.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:52 compute-0 nova_compute[117413]: 2025-10-08 16:15:52.880 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.129 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.130 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.147 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.148 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6196MB free_disk=73.27291870117188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.148 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.148 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:15:53 compute-0 nova_compute[117413]: 2025-10-08 16:15:53.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:54 compute-0 nova_compute[117413]: 2025-10-08 16:15:54.200 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:15:54 compute-0 nova_compute[117413]: 2025-10-08 16:15:54.200 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:15:53 up 24 min,  0 user,  load average: 0.13, 0.18, 0.29\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:15:54 compute-0 nova_compute[117413]: 2025-10-08 16:15:54.218 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:15:54 compute-0 nova_compute[117413]: 2025-10-08 16:15:54.723 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:15:55 compute-0 nova_compute[117413]: 2025-10-08 16:15:55.232 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:15:55 compute-0 nova_compute[117413]: 2025-10-08 16:15:55.233 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.084s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:15:55 compute-0 podman[142086]: 2025-10-08 16:15:55.443820682 +0000 UTC m=+0.053369606 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 08 16:15:55 compute-0 nova_compute[117413]: 2025-10-08 16:15:55.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:56 compute-0 nova_compute[117413]: 2025-10-08 16:15:56.234 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:56 compute-0 nova_compute[117413]: 2025-10-08 16:15:56.234 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:56 compute-0 nova_compute[117413]: 2025-10-08 16:15:56.234 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:56 compute-0 nova_compute[117413]: 2025-10-08 16:15:56.235 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:56 compute-0 nova_compute[117413]: 2025-10-08 16:15:56.235 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:58 compute-0 nova_compute[117413]: 2025-10-08 16:15:58.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:15:59 compute-0 nova_compute[117413]: 2025-10-08 16:15:59.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:15:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:59.631 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:0b:79 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3644951a011848ec91c55840c9a66158', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=bc02923c-7f95-45ae-9ad1-1ed85859f940) old=Port_Binding(mac=['fa:16:3e:31:0b:79'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3644951a011848ec91c55840c9a66158', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:15:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:59.632 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port bc02923c-7f95-45ae-9ad1-1ed85859f940 in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b updated
Oct 08 16:15:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:59.632 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:15:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:15:59.634 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0714fe07-421e-478d-b4aa-534c5b0259cb]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:15:59 compute-0 podman[127881]: time="2025-10-08T16:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:15:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:15:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3024 "" "Go-http-client/1.1"
Oct 08 16:16:00 compute-0 nova_compute[117413]: 2025-10-08 16:16:00.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: ERROR   16:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: ERROR   16:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: ERROR   16:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:16:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:16:01 compute-0 podman[142107]: 2025-10-08 16:16:01.443920885 +0000 UTC m=+0.052856842 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 08 16:16:03 compute-0 nova_compute[117413]: 2025-10-08 16:16:03.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:05 compute-0 nova_compute[117413]: 2025-10-08 16:16:05.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:06 compute-0 podman[142128]: 2025-10-08 16:16:06.451814291 +0000 UTC m=+0.056874906 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:16:08 compute-0 nova_compute[117413]: 2025-10-08 16:16:08.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:10 compute-0 nova_compute[117413]: 2025-10-08 16:16:10.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:11 compute-0 podman[142147]: 2025-10-08 16:16:11.454881652 +0000 UTC m=+0.061274681 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:16:11 compute-0 podman[142148]: 2025-10-08 16:16:11.503061189 +0000 UTC m=+0.097269012 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:16:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:11.834 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:8e:09 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-5d653cc6-d6b4-4cb0-982c-704d21f8681e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d653cc6-d6b4-4cb0-982c-704d21f8681e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd09e426-51f3-4efb-b439-ebcb72c778fe, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d5846df2-fbfe-495f-938d-4f30033afb6c) old=Port_Binding(mac=['fa:16:3e:0a:8e:09'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-5d653cc6-d6b4-4cb0-982c-704d21f8681e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d653cc6-d6b4-4cb0-982c-704d21f8681e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:16:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:11.835 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d5846df2-fbfe-495f-938d-4f30033afb6c in datapath 5d653cc6-d6b4-4cb0-982c-704d21f8681e updated
Oct 08 16:16:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:11.835 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d653cc6-d6b4-4cb0-982c-704d21f8681e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:16:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:11.836 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1902e3a8-c6cc-460d-883c-064a88f4fa91]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:16:13 compute-0 nova_compute[117413]: 2025-10-08 16:16:13.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:15 compute-0 nova_compute[117413]: 2025-10-08 16:16:15.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:16.890 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:16:16 compute-0 nova_compute[117413]: 2025-10-08 16:16:16.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:16.892 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:16:17 compute-0 ovn_controller[19768]: 2025-10-08T16:16:17Z|00048|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 08 16:16:18 compute-0 nova_compute[117413]: 2025-10-08 16:16:18.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:20 compute-0 nova_compute[117413]: 2025-10-08 16:16:20.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:21 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:21.894 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:16:23 compute-0 podman[142200]: 2025-10-08 16:16:23.502947899 +0000 UTC m=+0.093960038 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:16:23 compute-0 nova_compute[117413]: 2025-10-08 16:16:23.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:25 compute-0 nova_compute[117413]: 2025-10-08 16:16:25.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:26 compute-0 podman[142221]: 2025-10-08 16:16:26.464685708 +0000 UTC m=+0.070485002 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 08 16:16:28 compute-0 nova_compute[117413]: 2025-10-08 16:16:28.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:29 compute-0 podman[127881]: time="2025-10-08T16:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:16:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:16:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3021 "" "Go-http-client/1.1"
Oct 08 16:16:30 compute-0 nova_compute[117413]: 2025-10-08 16:16:30.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: ERROR   16:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: ERROR   16:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: ERROR   16:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:16:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:16:32 compute-0 podman[142242]: 2025-10-08 16:16:32.444992391 +0000 UTC m=+0.056232048 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:16:33 compute-0 nova_compute[117413]: 2025-10-08 16:16:33.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:35 compute-0 nova_compute[117413]: 2025-10-08 16:16:35.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:37 compute-0 podman[142263]: 2025-10-08 16:16:37.467218885 +0000 UTC m=+0.068208487 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:16:38 compute-0 nova_compute[117413]: 2025-10-08 16:16:38.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:40 compute-0 nova_compute[117413]: 2025-10-08 16:16:40.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:41.883 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:41.884 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:16:41.884 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:42 compute-0 podman[142283]: 2025-10-08 16:16:42.486377071 +0000 UTC m=+0.079997152 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:16:42 compute-0 podman[142284]: 2025-10-08 16:16:42.54058964 +0000 UTC m=+0.135708763 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:16:43 compute-0 nova_compute[117413]: 2025-10-08 16:16:43.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:45 compute-0 nova_compute[117413]: 2025-10-08 16:16:45.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:48 compute-0 nova_compute[117413]: 2025-10-08 16:16:48.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:48 compute-0 nova_compute[117413]: 2025-10-08 16:16:48.752 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:48 compute-0 nova_compute[117413]: 2025-10-08 16:16:48.753 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:49 compute-0 nova_compute[117413]: 2025-10-08 16:16:49.258 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:16:49 compute-0 nova_compute[117413]: 2025-10-08 16:16:49.813 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:49 compute-0 nova_compute[117413]: 2025-10-08 16:16:49.814 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:49 compute-0 nova_compute[117413]: 2025-10-08 16:16:49.819 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:16:49 compute-0 nova_compute[117413]: 2025-10-08 16:16:49.819 2 INFO nova.compute.claims [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:16:50 compute-0 nova_compute[117413]: 2025-10-08 16:16:50.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:50 compute-0 nova_compute[117413]: 2025-10-08 16:16:50.890 2 DEBUG nova.compute.provider_tree [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:16:51 compute-0 nova_compute[117413]: 2025-10-08 16:16:51.399 2 DEBUG nova.scheduler.client.report [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:16:51 compute-0 nova_compute[117413]: 2025-10-08 16:16:51.912 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.098s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:51 compute-0 nova_compute[117413]: 2025-10-08 16:16:51.912 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.430 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.431 2 DEBUG nova.network.neutron [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.431 2 WARNING neutronclient.v2_0.client [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.431 2 WARNING neutronclient.v2_0.client [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.874 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:16:52 compute-0 nova_compute[117413]: 2025-10-08 16:16:52.938 2 INFO nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.024 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.025 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.046 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.047 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6214MB free_disk=73.27289962768555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.047 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.047 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.264 2 DEBUG nova.network.neutron [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Successfully created port: 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.445 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:16:53 compute-0 nova_compute[117413]: 2025-10-08 16:16:53.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.078 2 DEBUG nova.network.neutron [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Successfully updated port: 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.137 2 DEBUG nova.compute.manager [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-changed-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.137 2 DEBUG nova.compute.manager [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Refreshing instance network info cache due to event network-changed-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.138 2 DEBUG oslo_concurrency.lockutils [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-0f6f8aa7-8a43-4471-afed-4203d5b80b4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.138 2 DEBUG oslo_concurrency.lockutils [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-0f6f8aa7-8a43-4471-afed-4203d5b80b4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.138 2 DEBUG nova.network.neutron [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Refreshing network info cache for port 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.239 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0f6f8aa7-8a43-4471-afed-4203d5b80b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.239 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.240 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:16:53 up 25 min,  0 user,  load average: 0.05, 0.14, 0.26\n', 'num_instances': '1', 'num_vm_building': '1', 'num_task_networking': '1', 'num_os_type_None': '1', 'num_proj_36f986860cbf4338bf6afd8aa7b4d147': '1', 'io_workload': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.273 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.466 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.468 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.469 2 INFO nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Creating image(s)
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.470 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "/var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.470 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "/var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.472 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "/var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.473 2 DEBUG oslo_utils.imageutils.format_inspector [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.480 2 DEBUG oslo_utils.imageutils.format_inspector [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.483 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:16:54 compute-0 podman[142331]: 2025-10-08 16:16:54.506137165 +0000 UTC m=+0.097106957 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.550 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.552 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.553 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.554 2 DEBUG oslo_utils.imageutils.format_inspector [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.558 2 DEBUG oslo_utils.imageutils.format_inspector [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.559 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.585 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "refresh_cache-0f6f8aa7-8a43-4471-afed-4203d5b80b4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.617 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.618 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.651 2 WARNING neutronclient.v2_0.client [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.662 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.663 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.110s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.663 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.725 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.726 2 DEBUG nova.virt.disk.api [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Checking if we can resize image /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.726 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.782 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.789 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.789 2 DEBUG nova.virt.disk.api [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Cannot resize image /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.790 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.790 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Ensure instance console log exists: /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.790 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.791 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.791 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.794 2 DEBUG nova.network.neutron [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:16:54 compute-0 nova_compute[117413]: 2025-10-08 16:16:54.943 2 DEBUG nova.network.neutron [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:16:55 compute-0 nova_compute[117413]: 2025-10-08 16:16:55.298 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:16:55 compute-0 nova_compute[117413]: 2025-10-08 16:16:55.299 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.251s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:55 compute-0 nova_compute[117413]: 2025-10-08 16:16:55.450 2 DEBUG oslo_concurrency.lockutils [req-0bc4dfbe-b2b6-4fa8-95ba-b09ad8703deb req-014dc6d2-f430-4930-97ed-c4c73368a0cb c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-0f6f8aa7-8a43-4471-afed-4203d5b80b4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:16:55 compute-0 nova_compute[117413]: 2025-10-08 16:16:55.451 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquired lock "refresh_cache-0f6f8aa7-8a43-4471-afed-4203d5b80b4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:16:55 compute-0 nova_compute[117413]: 2025-10-08 16:16:55.451 2 DEBUG nova.network.neutron [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:16:55 compute-0 nova_compute[117413]: 2025-10-08 16:16:55.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:56 compute-0 nova_compute[117413]: 2025-10-08 16:16:56.459 2 DEBUG nova.network.neutron [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:16:56 compute-0 nova_compute[117413]: 2025-10-08 16:16:56.637 2 WARNING neutronclient.v2_0.client [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:16:56 compute-0 nova_compute[117413]: 2025-10-08 16:16:56.778 2 DEBUG nova.network.neutron [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Updating instance_info_cache with network_info: [{"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.286 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Releasing lock "refresh_cache-0f6f8aa7-8a43-4471-afed-4203d5b80b4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.287 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Instance network_info: |[{"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.288 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Start _get_guest_xml network_info=[{"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.292 2 WARNING nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.294 2 DEBUG nova.virt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-1268526964', uuid='0f6f8aa7-8a43-4471-afed-4203d5b80b4c'), owner=OwnerMeta(userid='723962be4e3d48efb441d80077ac4263', username='tempest-TestExecuteActionsViaActuator-898376163-project-admin', projectid='36f986860cbf4338bf6afd8aa7b4d147', projectname='tempest-TestExecuteActionsViaActuator-898376163'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940217.2940586) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.294 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.295 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.295 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.295 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.295 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.295 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.299 2 DEBUG nova.virt.libvirt.host [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.299 2 DEBUG nova.virt.libvirt.host [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.302 2 DEBUG nova.virt.libvirt.host [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.303 2 DEBUG nova.virt.libvirt.host [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.303 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.303 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.304 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.304 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.304 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.304 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.304 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.304 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.305 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.305 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.305 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.305 2 DEBUG nova.virt.hardware [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.308 2 DEBUG nova.virt.libvirt.vif [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:16:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1268526964',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1268526964',id=5,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-uc10fk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:16:53Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=0f6f8aa7-8a43-4471-afed-4203d5b80b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.309 2 DEBUG nova.network.os_vif_util [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.309 2 DEBUG nova.network.os_vif_util [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.310 2 DEBUG nova.objects.instance [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f6f8aa7-8a43-4471-afed-4203d5b80b4c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:16:57 compute-0 podman[142368]: 2025-10-08 16:16:57.464592492 +0000 UTC m=+0.067298842 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64)
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.818 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <uuid>0f6f8aa7-8a43-4471-afed-4203d5b80b4c</uuid>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <name>instance-00000005</name>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-1268526964</nova:name>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:16:57</nova:creationTime>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:16:57 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:16:57 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:user uuid="723962be4e3d48efb441d80077ac4263">tempest-TestExecuteActionsViaActuator-898376163-project-admin</nova:user>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:project uuid="36f986860cbf4338bf6afd8aa7b4d147">tempest-TestExecuteActionsViaActuator-898376163</nova:project>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         <nova:port uuid="68bf22e3-50d7-4692-9dab-6dd3ad5df0cd">
Oct 08 16:16:57 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <system>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <entry name="serial">0f6f8aa7-8a43-4471-afed-4203d5b80b4c</entry>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <entry name="uuid">0f6f8aa7-8a43-4471-afed-4203d5b80b4c</entry>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </system>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <os>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </os>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <features>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </features>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.config"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:16:9c:8c"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <target dev="tap68bf22e3-50"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/console.log" append="off"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <video>
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </video>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:16:57 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:16:57 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:16:57 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:16:57 compute-0 nova_compute[117413]: </domain>
Oct 08 16:16:57 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.818 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Preparing to wait for external event network-vif-plugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.819 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.819 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.819 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.820 2 DEBUG nova.virt.libvirt.vif [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:16:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1268526964',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1268526964',id=5,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-uc10fk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:16:53Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=0f6f8aa7-8a43-4471-afed-4203d5b80b4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.820 2 DEBUG nova.network.os_vif_util [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.821 2 DEBUG nova.network.os_vif_util [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.821 2 DEBUG os_vif [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '1d3ce319-9264-5379-bb47-55b5e0b5b75f', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68bf22e3-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap68bf22e3-50, col_values=(('qos', UUID('c7f353c3-4431-42b5-8407-a3babbafa6ba')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap68bf22e3-50, col_values=(('external_ids', {'iface-id': '68bf22e3-50d7-4692-9dab-6dd3ad5df0cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:9c:8c', 'vm-uuid': '0f6f8aa7-8a43-4471-afed-4203d5b80b4c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:57 compute-0 NetworkManager[1034]: <info>  [1759940217.8309] manager: (tap68bf22e3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:16:57 compute-0 nova_compute[117413]: 2025-10-08 16:16:57.836 2 INFO os_vif [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50')
Oct 08 16:16:59 compute-0 nova_compute[117413]: 2025-10-08 16:16:59.374 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:16:59 compute-0 nova_compute[117413]: 2025-10-08 16:16:59.375 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:16:59 compute-0 nova_compute[117413]: 2025-10-08 16:16:59.375 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No VIF found with MAC fa:16:3e:16:9c:8c, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:16:59 compute-0 nova_compute[117413]: 2025-10-08 16:16:59.376 2 INFO nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Using config drive
Oct 08 16:16:59 compute-0 podman[127881]: time="2025-10-08T16:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:16:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:16:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Oct 08 16:16:59 compute-0 nova_compute[117413]: 2025-10-08 16:16:59.887 2 WARNING neutronclient.v2_0.client [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:17:00 compute-0 nova_compute[117413]: 2025-10-08 16:17:00.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:00 compute-0 nova_compute[117413]: 2025-10-08 16:17:00.872 2 INFO nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Creating config drive at /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.config
Oct 08 16:17:00 compute-0 nova_compute[117413]: 2025-10-08 16:17:00.883 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpe4lprufg execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.020 2 DEBUG oslo_concurrency.processutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpe4lprufg" returned: 0 in 0.137s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:17:01 compute-0 kernel: tap68bf22e3-50: entered promiscuous mode
Oct 08 16:17:01 compute-0 NetworkManager[1034]: <info>  [1759940221.1023] manager: (tap68bf22e3-50): new Tun device (/org/freedesktop/NetworkManager/Devices/27)
Oct 08 16:17:01 compute-0 ovn_controller[19768]: 2025-10-08T16:17:01Z|00049|binding|INFO|Claiming lport 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd for this chassis.
Oct 08 16:17:01 compute-0 ovn_controller[19768]: 2025-10-08T16:17:01Z|00050|binding|INFO|68bf22e3-50d7-4692-9dab-6dd3ad5df0cd: Claiming fa:16:3e:16:9c:8c 10.100.0.9
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.117 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:9c:8c 10.100.0.9'], port_security=['fa:16:3e:16:9c:8c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0f6f8aa7-8a43-4471-afed-4203d5b80b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '4', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.118 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b bound to our chassis
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.119 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.134 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9692f4f1-a558-4c26-a1a3-1310dbf3726f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.135 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcfb6ba7b-51 in ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.139 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcfb6ba7b-50 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.140 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[89f20253-674c-4357-9fd0-6ed9d9c8443f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.140 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[eab1dbf4-a81f-4996-bb9b-d279e90bbf48]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 systemd-udevd[142411]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:17:01 compute-0 systemd-machined[77548]: New machine qemu-2-instance-00000005.
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.153 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[7500bb69-ae9f-40d9-a0f8-c40b7f924894]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 NetworkManager[1034]: <info>  [1759940221.1590] device (tap68bf22e3-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:17:01 compute-0 NetworkManager[1034]: <info>  [1759940221.1602] device (tap68bf22e3-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.163 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9905444e-7b0e-4be6-9b5e-9566fe86e65f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 ovn_controller[19768]: 2025-10-08T16:17:01Z|00051|binding|INFO|Setting lport 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd ovn-installed in OVS
Oct 08 16:17:01 compute-0 ovn_controller[19768]: 2025-10-08T16:17:01Z|00052|binding|INFO|Setting lport 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd up in Southbound
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.201 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[8a35e84e-7ff7-4ac8-b6fc-88f41afb384a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 systemd-udevd[142415]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.204 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[232cf159-423f-48ca-8dce-7f87d9eaa7e4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 NetworkManager[1034]: <info>  [1759940221.2063] manager: (tapcfb6ba7b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/28)
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.248 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[8dab14a9-b1b6-4206-9c35-14ecb7562d52]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.253 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2057b5-c2d4-4c4c-a5fc-a485f8c15214]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 NetworkManager[1034]: <info>  [1759940221.2834] device (tapcfb6ba7b-50): carrier: link connected
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.293 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2ff254-00a4-43f6-b1b4-efba0c8c0528]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.314 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[531619f3-64e7-4420-ba14-888fc8b1cae6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 142443, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.332 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e33b8d84-5bfb-4a8b-a46b-7eae93419264]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:b79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 150997, 'tstamp': 150997}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 142444, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.353 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[dc712918-3067-41fa-ba95-cf523256211b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 142445, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.365 2 DEBUG nova.compute.manager [req-fea4c08e-2cba-4f06-897b-204362e13c23 req-40d40d14-021d-49fb-8803-88d099c1fb4a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-plugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.366 2 DEBUG oslo_concurrency.lockutils [req-fea4c08e-2cba-4f06-897b-204362e13c23 req-40d40d14-021d-49fb-8803-88d099c1fb4a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.366 2 DEBUG oslo_concurrency.lockutils [req-fea4c08e-2cba-4f06-897b-204362e13c23 req-40d40d14-021d-49fb-8803-88d099c1fb4a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.366 2 DEBUG oslo_concurrency.lockutils [req-fea4c08e-2cba-4f06-897b-204362e13c23 req-40d40d14-021d-49fb-8803-88d099c1fb4a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.366 2 DEBUG nova.compute.manager [req-fea4c08e-2cba-4f06-897b-204362e13c23 req-40d40d14-021d-49fb-8803-88d099c1fb4a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Processing event network-vif-plugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.390 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4e0c56-4781-4d62-bd7e-fafe07b7b2e2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: ERROR   16:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: ERROR   16:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: ERROR   16:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:17:01 compute-0 openstack_network_exporter[130039]: ERROR   16:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.437 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.468 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[445ab0a9-d762-4ec1-bef7-2e408efbaffb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.469 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.470 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.470 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:17:01 compute-0 kernel: tapcfb6ba7b-50: entered promiscuous mode
Oct 08 16:17:01 compute-0 NetworkManager[1034]: <info>  [1759940221.4730] manager: (tapcfb6ba7b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.476 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:17:01 compute-0 ovn_controller[19768]: 2025-10-08T16:17:01Z|00053|binding|INFO|Releasing lport bc02923c-7f95-45ae-9ad1-1ed85859f940 from this chassis (sb_readonly=0)
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.479 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f8819bc0-ef35-4710-af93-4a504eeeae43]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.479 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.479 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.479 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.480 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.480 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2c77f889-0318-4780-b87e-232b7d1422bd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.480 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.481 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[71545585-2641-430a-98ec-8c46b4a90999]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.481 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:17:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:01.481 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'env', 'PROCESS_TAG=haproxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:17:01 compute-0 nova_compute[117413]: 2025-10-08 16:17:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:01 compute-0 podman[142483]: 2025-10-08 16:17:01.886851194 +0000 UTC m=+0.060790377 container create d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:17:01 compute-0 systemd[1]: Started libpod-conmon-d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6.scope.
Oct 08 16:17:01 compute-0 podman[142483]: 2025-10-08 16:17:01.855536725 +0000 UTC m=+0.029475918 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:17:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/104c62b0cd8a7b0f1fd4df48ed5247032f1a4708d7c7397457e218304bd75d33/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:17:01 compute-0 podman[142483]: 2025-10-08 16:17:01.972278629 +0000 UTC m=+0.146217832 container init d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:17:01 compute-0 podman[142483]: 2025-10-08 16:17:01.977796686 +0000 UTC m=+0.151735859 container start d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:17:02 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [NOTICE]   (142503) : New worker (142505) forked
Oct 08 16:17:02 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [NOTICE]   (142503) : Loading success.
Oct 08 16:17:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:02.045 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.322 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.327 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.330 2 INFO nova.virt.libvirt.driver [-] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Instance spawned successfully.
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.331 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.845 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.846 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.847 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.848 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.849 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:17:02 compute-0 nova_compute[117413]: 2025-10-08 16:17:02.849 2 DEBUG nova.virt.libvirt.driver [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.362 2 INFO nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Took 8.90 seconds to spawn the instance on the hypervisor.
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.363 2 DEBUG nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.432 2 DEBUG nova.compute.manager [req-382f6e29-ae61-44da-9da1-0c7977024b90 req-5a548aeb-e766-45d7-9eff-22fabe737d34 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-plugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.433 2 DEBUG oslo_concurrency.lockutils [req-382f6e29-ae61-44da-9da1-0c7977024b90 req-5a548aeb-e766-45d7-9eff-22fabe737d34 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.433 2 DEBUG oslo_concurrency.lockutils [req-382f6e29-ae61-44da-9da1-0c7977024b90 req-5a548aeb-e766-45d7-9eff-22fabe737d34 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.433 2 DEBUG oslo_concurrency.lockutils [req-382f6e29-ae61-44da-9da1-0c7977024b90 req-5a548aeb-e766-45d7-9eff-22fabe737d34 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.433 2 DEBUG nova.compute.manager [req-382f6e29-ae61-44da-9da1-0c7977024b90 req-5a548aeb-e766-45d7-9eff-22fabe737d34 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] No waiting events found dispatching network-vif-plugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.433 2 WARNING nova.compute.manager [req-382f6e29-ae61-44da-9da1-0c7977024b90 req-5a548aeb-e766-45d7-9eff-22fabe737d34 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received unexpected event network-vif-plugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd for instance with vm_state active and task_state None.
Oct 08 16:17:03 compute-0 podman[142515]: 2025-10-08 16:17:03.476008428 +0000 UTC m=+0.069999978 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.4, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:17:03 compute-0 nova_compute[117413]: 2025-10-08 16:17:03.896 2 INFO nova.compute.manager [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Took 14.13 seconds to build instance.
Oct 08 16:17:04 compute-0 nova_compute[117413]: 2025-10-08 16:17:04.402 2 DEBUG oslo_concurrency.lockutils [None req-76607ea7-b0d2-4272-b86d-97e5b77a165d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.649s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:17:05 compute-0 nova_compute[117413]: 2025-10-08 16:17:05.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:07 compute-0 nova_compute[117413]: 2025-10-08 16:17:07.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:08 compute-0 podman[142536]: 2025-10-08 16:17:08.448803578 +0000 UTC m=+0.057881654 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:17:10 compute-0 nova_compute[117413]: 2025-10-08 16:17:10.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:11.046 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:17:12 compute-0 nova_compute[117413]: 2025-10-08 16:17:12.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:13 compute-0 podman[142566]: 2025-10-08 16:17:13.450093288 +0000 UTC m=+0.060180429 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:17:13 compute-0 podman[142567]: 2025-10-08 16:17:13.484477494 +0000 UTC m=+0.088662158 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 16:17:15 compute-0 ovn_controller[19768]: 2025-10-08T16:17:15Z|00003|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:9c:8c 10.100.0.9
Oct 08 16:17:15 compute-0 ovn_controller[19768]: 2025-10-08T16:17:15Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:9c:8c 10.100.0.9
Oct 08 16:17:15 compute-0 nova_compute[117413]: 2025-10-08 16:17:15.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:17 compute-0 nova_compute[117413]: 2025-10-08 16:17:17.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:20 compute-0 nova_compute[117413]: 2025-10-08 16:17:20.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:22 compute-0 nova_compute[117413]: 2025-10-08 16:17:22.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:25 compute-0 podman[142617]: 2025-10-08 16:17:25.445069571 +0000 UTC m=+0.054937681 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:17:25 compute-0 nova_compute[117413]: 2025-10-08 16:17:25.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:27 compute-0 nova_compute[117413]: 2025-10-08 16:17:27.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:27 compute-0 podman[142638]: 2025-10-08 16:17:27.938053904 +0000 UTC m=+0.063207915 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1755695350, config_id=edpm)
Oct 08 16:17:28 compute-0 irqbalance[843]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 08 16:17:28 compute-0 irqbalance[843]: IRQ 32 affinity is now unmanaged
Oct 08 16:17:29 compute-0 podman[127881]: time="2025-10-08T16:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:17:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:17:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3485 "" "Go-http-client/1.1"
Oct 08 16:17:30 compute-0 nova_compute[117413]: 2025-10-08 16:17:30.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: ERROR   16:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: ERROR   16:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: ERROR   16:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: ERROR   16:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:17:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:17:32 compute-0 nova_compute[117413]: 2025-10-08 16:17:32.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:34 compute-0 podman[142659]: 2025-10-08 16:17:34.463575946 +0000 UTC m=+0.062726052 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, tcib_build_tag=watcher_latest)
Oct 08 16:17:35 compute-0 nova_compute[117413]: 2025-10-08 16:17:35.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:37 compute-0 nova_compute[117413]: 2025-10-08 16:17:37.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:39 compute-0 podman[142679]: 2025-10-08 16:17:39.481196069 +0000 UTC m=+0.077874581 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:17:40 compute-0 nova_compute[117413]: 2025-10-08 16:17:40.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:41 compute-0 sshd-session[142700]: banner exchange: Connection from 128.203.200.175 port 39896: invalid format
Oct 08 16:17:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:41.886 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:17:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:41.886 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:17:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:17:41.887 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:17:42 compute-0 nova_compute[117413]: 2025-10-08 16:17:42.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:44 compute-0 podman[142702]: 2025-10-08 16:17:44.498483873 +0000 UTC m=+0.083438980 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:17:44 compute-0 podman[142703]: 2025-10-08 16:17:44.544912001 +0000 UTC m=+0.124065293 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:17:45 compute-0 nova_compute[117413]: 2025-10-08 16:17:45.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:47 compute-0 nova_compute[117413]: 2025-10-08 16:17:47.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:50 compute-0 sshd-session[142698]: Connection closed by 128.203.200.175 port 39884 [preauth]
Oct 08 16:17:50 compute-0 nova_compute[117413]: 2025-10-08 16:17:50.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:52 compute-0 nova_compute[117413]: 2025-10-08 16:17:52.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:17:54 compute-0 nova_compute[117413]: 2025-10-08 16:17:54.880 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:17:55 compute-0 nova_compute[117413]: 2025-10-08 16:17:55.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:55 compute-0 nova_compute[117413]: 2025-10-08 16:17:55.925 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:17:55 compute-0 nova_compute[117413]: 2025-10-08 16:17:55.994 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:17:55 compute-0 nova_compute[117413]: 2025-10-08 16:17:55.996 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.056 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.224 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.226 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.249 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.250 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5990MB free_disk=73.24203491210938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.250 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:17:56 compute-0 nova_compute[117413]: 2025-10-08 16:17:56.251 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:17:56 compute-0 podman[142761]: 2025-10-08 16:17:56.480443604 +0000 UTC m=+0.070624746 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:17:57 compute-0 nova_compute[117413]: 2025-10-08 16:17:57.296 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0f6f8aa7-8a43-4471-afed-4203d5b80b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:17:57 compute-0 nova_compute[117413]: 2025-10-08 16:17:57.297 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:17:57 compute-0 nova_compute[117413]: 2025-10-08 16:17:57.297 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:17:56 up 26 min,  0 user,  load average: 0.23, 0.19, 0.27\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_36f986860cbf4338bf6afd8aa7b4d147': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:17:57 compute-0 nova_compute[117413]: 2025-10-08 16:17:57.327 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:17:57 compute-0 nova_compute[117413]: 2025-10-08 16:17:57.835 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:17:57 compute-0 nova_compute[117413]: 2025-10-08 16:17:57.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:17:58 compute-0 nova_compute[117413]: 2025-10-08 16:17:58.347 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:17:58 compute-0 nova_compute[117413]: 2025-10-08 16:17:58.348 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.097s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:17:58 compute-0 podman[142781]: 2025-10-08 16:17:58.486785457 +0000 UTC m=+0.076681345 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Oct 08 16:17:59 compute-0 nova_compute[117413]: 2025-10-08 16:17:59.344 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:59 compute-0 nova_compute[117413]: 2025-10-08 16:17:59.344 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:59 compute-0 nova_compute[117413]: 2025-10-08 16:17:59.345 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:59 compute-0 nova_compute[117413]: 2025-10-08 16:17:59.345 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:59 compute-0 nova_compute[117413]: 2025-10-08 16:17:59.346 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:59 compute-0 nova_compute[117413]: 2025-10-08 16:17:59.347 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:17:59 compute-0 podman[127881]: time="2025-10-08T16:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:17:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:17:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3489 "" "Go-http-client/1.1"
Oct 08 16:18:00 compute-0 nova_compute[117413]: 2025-10-08 16:18:00.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:01 compute-0 nova_compute[117413]: 2025-10-08 16:18:01.360 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: ERROR   16:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: ERROR   16:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: ERROR   16:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: ERROR   16:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:18:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:18:02 compute-0 nova_compute[117413]: 2025-10-08 16:18:02.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:05 compute-0 podman[142804]: 2025-10-08 16:18:05.451667365 +0000 UTC m=+0.058644442 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 16:18:05 compute-0 nova_compute[117413]: 2025-10-08 16:18:05.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:07 compute-0 nova_compute[117413]: 2025-10-08 16:18:07.123 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:07 compute-0 nova_compute[117413]: 2025-10-08 16:18:07.123 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:07 compute-0 nova_compute[117413]: 2025-10-08 16:18:07.628 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:18:07 compute-0 nova_compute[117413]: 2025-10-08 16:18:07.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:08 compute-0 nova_compute[117413]: 2025-10-08 16:18:08.177 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:08 compute-0 nova_compute[117413]: 2025-10-08 16:18:08.177 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:08 compute-0 nova_compute[117413]: 2025-10-08 16:18:08.185 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:18:08 compute-0 nova_compute[117413]: 2025-10-08 16:18:08.185 2 INFO nova.compute.claims [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:18:09 compute-0 nova_compute[117413]: 2025-10-08 16:18:09.248 2 DEBUG nova.compute.provider_tree [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:18:09 compute-0 nova_compute[117413]: 2025-10-08 16:18:09.756 2 DEBUG nova.scheduler.client.report [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.265 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.088s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.266 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:18:10 compute-0 podman[142824]: 2025-10-08 16:18:10.451305661 +0000 UTC m=+0.061788481 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.776 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.777 2 DEBUG nova.network.neutron [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.777 2 WARNING neutronclient.v2_0.client [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:10 compute-0 nova_compute[117413]: 2025-10-08 16:18:10.777 2 WARNING neutronclient.v2_0.client [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:11 compute-0 nova_compute[117413]: 2025-10-08 16:18:11.284 2 INFO nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:18:11 compute-0 nova_compute[117413]: 2025-10-08 16:18:11.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:11.731 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:18:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:11.732 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:18:11 compute-0 nova_compute[117413]: 2025-10-08 16:18:11.792 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:18:11 compute-0 nova_compute[117413]: 2025-10-08 16:18:11.871 2 DEBUG nova.network.neutron [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Successfully created port: 8ec197ec-6e84-4cdb-8907-b92c269e3285 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.814 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.816 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.816 2 INFO nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Creating image(s)
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.817 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "/var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.818 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "/var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.819 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "/var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.820 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.827 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.829 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.886 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.887 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.887 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.888 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.892 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.892 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.975 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:12 compute-0 nova_compute[117413]: 2025-10-08 16:18:12.977 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.036 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.038 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.039 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.123 2 DEBUG nova.network.neutron [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Successfully updated port: 8ec197ec-6e84-4cdb-8907-b92c269e3285 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.127 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.128 2 DEBUG nova.virt.disk.api [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Checking if we can resize image /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.129 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.195 2 DEBUG nova.compute.manager [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-changed-8ec197ec-6e84-4cdb-8907-b92c269e3285 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.196 2 DEBUG nova.compute.manager [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Refreshing instance network info cache due to event network-changed-8ec197ec-6e84-4cdb-8907-b92c269e3285. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.197 2 DEBUG oslo_concurrency.lockutils [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-a11dbe8f-56c0-469a-91dc-1f2104aedd13" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.198 2 DEBUG oslo_concurrency.lockutils [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-a11dbe8f-56c0-469a-91dc-1f2104aedd13" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.198 2 DEBUG nova.network.neutron [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Refreshing network info cache for port 8ec197ec-6e84-4cdb-8907-b92c269e3285 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.201 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.202 2 DEBUG nova.virt.disk.api [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Cannot resize image /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.202 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.203 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Ensure instance console log exists: /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.203 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.204 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.204 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.642 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "refresh_cache-a11dbe8f-56c0-469a-91dc-1f2104aedd13" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.706 2 WARNING neutronclient.v2_0.client [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.834 2 DEBUG nova.network.neutron [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:18:13 compute-0 nova_compute[117413]: 2025-10-08 16:18:13.970 2 DEBUG nova.network.neutron [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:18:14 compute-0 nova_compute[117413]: 2025-10-08 16:18:14.477 2 DEBUG oslo_concurrency.lockutils [req-dfed8ebe-c14c-4f9a-a646-11d9b723945b req-3767e507-ef9c-4903-bc4b-0114253bd4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-a11dbe8f-56c0-469a-91dc-1f2104aedd13" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:18:14 compute-0 nova_compute[117413]: 2025-10-08 16:18:14.478 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquired lock "refresh_cache-a11dbe8f-56c0-469a-91dc-1f2104aedd13" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:18:14 compute-0 nova_compute[117413]: 2025-10-08 16:18:14.479 2 DEBUG nova.network.neutron [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:18:15 compute-0 podman[142860]: 2025-10-08 16:18:15.447169709 +0000 UTC m=+0.056731667 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:18:15 compute-0 podman[142861]: 2025-10-08 16:18:15.48618145 +0000 UTC m=+0.092924147 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 08 16:18:15 compute-0 nova_compute[117413]: 2025-10-08 16:18:15.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:15 compute-0 nova_compute[117413]: 2025-10-08 16:18:15.837 2 DEBUG nova.network.neutron [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:18:16 compute-0 nova_compute[117413]: 2025-10-08 16:18:16.038 2 WARNING neutronclient.v2_0.client [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:16 compute-0 nova_compute[117413]: 2025-10-08 16:18:16.555 2 DEBUG nova.network.neutron [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Updating instance_info_cache with network_info: [{"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.061 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Releasing lock "refresh_cache-a11dbe8f-56c0-469a-91dc-1f2104aedd13" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.062 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Instance network_info: |[{"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.066 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Start _get_guest_xml network_info=[{"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.072 2 WARNING nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.075 2 DEBUG nova.virt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-284191439', uuid='a11dbe8f-56c0-469a-91dc-1f2104aedd13'), owner=OwnerMeta(userid='723962be4e3d48efb441d80077ac4263', username='tempest-TestExecuteActionsViaActuator-898376163-project-admin', projectid='36f986860cbf4338bf6afd8aa7b4d147', projectname='tempest-TestExecuteActionsViaActuator-898376163'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940297.075111) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.080 2 DEBUG nova.virt.libvirt.host [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.081 2 DEBUG nova.virt.libvirt.host [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.084 2 DEBUG nova.virt.libvirt.host [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.085 2 DEBUG nova.virt.libvirt.host [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.086 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.086 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.087 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.088 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.088 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.089 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.089 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.090 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.090 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.091 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.091 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.092 2 DEBUG nova.virt.hardware [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.098 2 DEBUG nova.virt.libvirt.vif [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-284191439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-284191439',id=7,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-51qp54u6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:18:11Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=a11dbe8f-56c0-469a-91dc-1f2104aedd13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.099 2 DEBUG nova.network.os_vif_util [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.100 2 DEBUG nova.network.os_vif_util [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.102 2 DEBUG nova.objects.instance [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'pci_devices' on Instance uuid a11dbe8f-56c0-469a-91dc-1f2104aedd13 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.610 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <uuid>a11dbe8f-56c0-469a-91dc-1f2104aedd13</uuid>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <name>instance-00000007</name>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-284191439</nova:name>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:18:17</nova:creationTime>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:18:17 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:18:17 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:user uuid="723962be4e3d48efb441d80077ac4263">tempest-TestExecuteActionsViaActuator-898376163-project-admin</nova:user>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:project uuid="36f986860cbf4338bf6afd8aa7b4d147">tempest-TestExecuteActionsViaActuator-898376163</nova:project>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         <nova:port uuid="8ec197ec-6e84-4cdb-8907-b92c269e3285">
Oct 08 16:18:17 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <system>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <entry name="serial">a11dbe8f-56c0-469a-91dc-1f2104aedd13</entry>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <entry name="uuid">a11dbe8f-56c0-469a-91dc-1f2104aedd13</entry>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </system>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <os>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </os>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <features>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </features>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.config"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:93:50:fa"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <target dev="tap8ec197ec-6e"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/console.log" append="off"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <video>
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </video>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:18:17 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:18:17 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:18:17 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:18:17 compute-0 nova_compute[117413]: </domain>
Oct 08 16:18:17 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.612 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Preparing to wait for external event network-vif-plugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.613 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.613 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.613 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.614 2 DEBUG nova.virt.libvirt.vif [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-284191439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-284191439',id=7,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-51qp54u6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:18:11Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=a11dbe8f-56c0-469a-91dc-1f2104aedd13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.614 2 DEBUG nova.network.os_vif_util [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.615 2 DEBUG nova.network.os_vif_util [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.615 2 DEBUG os_vif [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c174d6ca-1657-5d01-8ed1-a107ab4c67ba', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ec197ec-6e, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap8ec197ec-6e, col_values=(('qos', UUID('0c0cfb76-46a3-4b87-aad5-5e9db60e8f89')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.622 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap8ec197ec-6e, col_values=(('external_ids', {'iface-id': '8ec197ec-6e84-4cdb-8907-b92c269e3285', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:50:fa', 'vm-uuid': 'a11dbe8f-56c0-469a-91dc-1f2104aedd13'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 NetworkManager[1034]: <info>  [1759940297.6245] manager: (tap8ec197ec-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:17 compute-0 nova_compute[117413]: 2025-10-08 16:18:17.636 2 INFO os_vif [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e')
Oct 08 16:18:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:18.733 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.179 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.180 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.180 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No VIF found with MAC fa:16:3e:93:50:fa, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.181 2 INFO nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Using config drive
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.694 2 WARNING neutronclient.v2_0.client [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.920 2 INFO nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Creating config drive at /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.config
Oct 08 16:18:19 compute-0 nova_compute[117413]: 2025-10-08 16:18:19.926 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmp_hay7p49 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.059 2 DEBUG oslo_concurrency.processutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmp_hay7p49" returned: 0 in 0.134s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:20 compute-0 kernel: tap8ec197ec-6e: entered promiscuous mode
Oct 08 16:18:20 compute-0 NetworkManager[1034]: <info>  [1759940300.1500] manager: (tap8ec197ec-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Oct 08 16:18:20 compute-0 ovn_controller[19768]: 2025-10-08T16:18:20Z|00054|binding|INFO|Claiming lport 8ec197ec-6e84-4cdb-8907-b92c269e3285 for this chassis.
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:20 compute-0 ovn_controller[19768]: 2025-10-08T16:18:20Z|00055|binding|INFO|8ec197ec-6e84-4cdb-8907-b92c269e3285: Claiming fa:16:3e:93:50:fa 10.100.0.3
Oct 08 16:18:20 compute-0 ovn_controller[19768]: 2025-10-08T16:18:20Z|00056|binding|INFO|Setting lport 8ec197ec-6e84-4cdb-8907-b92c269e3285 ovn-installed in OVS
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:20 compute-0 ovn_controller[19768]: 2025-10-08T16:18:20Z|00057|binding|INFO|Setting lport 8ec197ec-6e84-4cdb-8907-b92c269e3285 up in Southbound
Oct 08 16:18:20 compute-0 systemd-udevd[142927]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.191 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:50:fa 10.100.0.3'], port_security=['fa:16:3e:93:50:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a11dbe8f-56c0-469a-91dc-1f2104aedd13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '4', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=8ec197ec-6e84-4cdb-8907-b92c269e3285) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.192 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec197ec-6e84-4cdb-8907-b92c269e3285 in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b bound to our chassis
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.195 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:18:20 compute-0 NetworkManager[1034]: <info>  [1759940300.2164] device (tap8ec197ec-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:18:20 compute-0 NetworkManager[1034]: <info>  [1759940300.2187] device (tap8ec197ec-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.220 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e032c1c7-6627-4beb-8e9d-76de18f27fc5]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 systemd-machined[77548]: New machine qemu-3-instance-00000007.
Oct 08 16:18:20 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.267 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[22291a90-b921-4baa-a43e-daee384d40e6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.271 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6e7fdf-5f38-43f7-aeea-9cd3dd68e546]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.315 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[4450a79b-4079-4405-afe4-cf8a5966b58d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.344 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[27a76179-cdaa-47bc-a5f9-3207ea5f8e4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 142943, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.370 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[de97267b-3530-4c0f-b8df-31eda6e3f93e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 142945, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 142945, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.372 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.378 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.379 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.379 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.379 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:18:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:20.381 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1793a59c-39dc-4f8f-95d9-e27e31a9bf8f]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.990 2 DEBUG nova.compute.manager [req-ac89ffc8-3d4b-4d07-95f3-7ca20514f50a req-6e910787-b74b-4c68-9d49-c656696c6764 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-plugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.991 2 DEBUG oslo_concurrency.lockutils [req-ac89ffc8-3d4b-4d07-95f3-7ca20514f50a req-6e910787-b74b-4c68-9d49-c656696c6764 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.992 2 DEBUG oslo_concurrency.lockutils [req-ac89ffc8-3d4b-4d07-95f3-7ca20514f50a req-6e910787-b74b-4c68-9d49-c656696c6764 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.992 2 DEBUG oslo_concurrency.lockutils [req-ac89ffc8-3d4b-4d07-95f3-7ca20514f50a req-6e910787-b74b-4c68-9d49-c656696c6764 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:20 compute-0 nova_compute[117413]: 2025-10-08 16:18:20.992 2 DEBUG nova.compute.manager [req-ac89ffc8-3d4b-4d07-95f3-7ca20514f50a req-6e910787-b74b-4c68-9d49-c656696c6764 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Processing event network-vif-plugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.383 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.387 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.391 2 INFO nova.virt.libvirt.driver [-] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Instance spawned successfully.
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.392 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.906 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.907 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.907 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.908 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.908 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:18:21 compute-0 nova_compute[117413]: 2025-10-08 16:18:21.908 2 DEBUG nova.virt.libvirt.driver [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:18:22 compute-0 nova_compute[117413]: 2025-10-08 16:18:22.419 2 INFO nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Took 9.60 seconds to spawn the instance on the hypervisor.
Oct 08 16:18:22 compute-0 nova_compute[117413]: 2025-10-08 16:18:22.419 2 DEBUG nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:18:22 compute-0 nova_compute[117413]: 2025-10-08 16:18:22.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:22 compute-0 nova_compute[117413]: 2025-10-08 16:18:22.962 2 INFO nova.compute.manager [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Took 14.82 seconds to build instance.
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.046 2 DEBUG nova.compute.manager [req-260abded-f545-4694-b10d-1dbecc10a511 req-1349a38c-accb-4807-bcad-a1302165cbb1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-plugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.047 2 DEBUG oslo_concurrency.lockutils [req-260abded-f545-4694-b10d-1dbecc10a511 req-1349a38c-accb-4807-bcad-a1302165cbb1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.047 2 DEBUG oslo_concurrency.lockutils [req-260abded-f545-4694-b10d-1dbecc10a511 req-1349a38c-accb-4807-bcad-a1302165cbb1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.048 2 DEBUG oslo_concurrency.lockutils [req-260abded-f545-4694-b10d-1dbecc10a511 req-1349a38c-accb-4807-bcad-a1302165cbb1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.048 2 DEBUG nova.compute.manager [req-260abded-f545-4694-b10d-1dbecc10a511 req-1349a38c-accb-4807-bcad-a1302165cbb1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] No waiting events found dispatching network-vif-plugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.049 2 WARNING nova.compute.manager [req-260abded-f545-4694-b10d-1dbecc10a511 req-1349a38c-accb-4807-bcad-a1302165cbb1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received unexpected event network-vif-plugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 for instance with vm_state active and task_state None.
Oct 08 16:18:23 compute-0 nova_compute[117413]: 2025-10-08 16:18:23.468 2 DEBUG oslo_concurrency.lockutils [None req-c7e59b71-b7a6-4a7c-9b66-f072f6b2c132 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.345s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:25 compute-0 nova_compute[117413]: 2025-10-08 16:18:25.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:27 compute-0 podman[142953]: 2025-10-08 16:18:27.471383514 +0000 UTC m=+0.073174115 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 08 16:18:27 compute-0 nova_compute[117413]: 2025-10-08 16:18:27.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:29 compute-0 podman[142973]: 2025-10-08 16:18:29.453536199 +0000 UTC m=+0.059399253 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, maintainer=Red Hat, Inc.)
Oct 08 16:18:29 compute-0 podman[127881]: time="2025-10-08T16:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:18:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:18:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3483 "" "Go-http-client/1.1"
Oct 08 16:18:30 compute-0 nova_compute[117413]: 2025-10-08 16:18:30.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: ERROR   16:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: ERROR   16:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: ERROR   16:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: ERROR   16:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:18:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:18:32 compute-0 nova_compute[117413]: 2025-10-08 16:18:32.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:33 compute-0 ovn_controller[19768]: 2025-10-08T16:18:33Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:50:fa 10.100.0.3
Oct 08 16:18:33 compute-0 ovn_controller[19768]: 2025-10-08T16:18:33Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:50:fa 10.100.0.3
Oct 08 16:18:35 compute-0 nova_compute[117413]: 2025-10-08 16:18:35.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:36 compute-0 podman[143005]: 2025-10-08 16:18:36.452797097 +0000 UTC m=+0.057778137 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:18:37 compute-0 nova_compute[117413]: 2025-10-08 16:18:37.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:40 compute-0 nova_compute[117413]: 2025-10-08 16:18:40.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:41 compute-0 podman[143025]: 2025-10-08 16:18:41.446715199 +0000 UTC m=+0.054302398 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:18:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:41.888 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:41.889 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:18:41.889 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:42 compute-0 nova_compute[117413]: 2025-10-08 16:18:42.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:45 compute-0 nova_compute[117413]: 2025-10-08 16:18:45.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:46 compute-0 podman[143045]: 2025-10-08 16:18:46.462260409 +0000 UTC m=+0.065828096 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:18:46 compute-0 podman[143046]: 2025-10-08 16:18:46.505363017 +0000 UTC m=+0.104938510 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:18:47 compute-0 nova_compute[117413]: 2025-10-08 16:18:47.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:50 compute-0 nova_compute[117413]: 2025-10-08 16:18:50.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:52 compute-0 nova_compute[117413]: 2025-10-08 16:18:52.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:52 compute-0 nova_compute[117413]: 2025-10-08 16:18:52.892 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:52 compute-0 nova_compute[117413]: 2025-10-08 16:18:52.893 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:53 compute-0 nova_compute[117413]: 2025-10-08 16:18:53.398 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:18:53 compute-0 nova_compute[117413]: 2025-10-08 16:18:53.948 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:53 compute-0 nova_compute[117413]: 2025-10-08 16:18:53.949 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:53 compute-0 nova_compute[117413]: 2025-10-08 16:18:53.957 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:18:53 compute-0 nova_compute[117413]: 2025-10-08 16:18:53.958 2 INFO nova.compute.claims [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.059 2 DEBUG nova.compute.provider_tree [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.567 2 DEBUG nova.scheduler.client.report [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:55 compute-0 nova_compute[117413]: 2025-10-08 16:18:55.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.079 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.130s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.080 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.084 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.211s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.085 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.085 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.594 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.595 2 DEBUG nova.network.neutron [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.595 2 WARNING neutronclient.v2_0.client [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:56 compute-0 nova_compute[117413]: 2025-10-08 16:18:56.596 2 WARNING neutronclient.v2_0.client [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.103 2 INFO nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.134 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.206 2 DEBUG nova.network.neutron [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Successfully created port: 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.228 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.229 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.313 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.319 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.374 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.375 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.434 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.561 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.562 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.578 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.016s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.579 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5876MB free_disk=73.21331024169922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.579 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.580 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.612 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.851 2 DEBUG nova.network.neutron [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Successfully updated port: 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.891 2 DEBUG nova.compute.manager [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-changed-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.892 2 DEBUG nova.compute.manager [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Refreshing instance network info cache due to event network-changed-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.892 2 DEBUG oslo_concurrency.lockutils [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-0429af3d-07c5-445e-bc3d-8df845af8e75" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.893 2 DEBUG oslo_concurrency.lockutils [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-0429af3d-07c5-445e-bc3d-8df845af8e75" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:18:57 compute-0 nova_compute[117413]: 2025-10-08 16:18:57.893 2 DEBUG nova.network.neutron [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Refreshing network info cache for port 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.358 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "refresh_cache-0429af3d-07c5-445e-bc3d-8df845af8e75" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.400 2 WARNING neutronclient.v2_0.client [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:18:58 compute-0 podman[143107]: 2025-10-08 16:18:58.47761724 +0000 UTC m=+0.071229320 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.479 2 DEBUG nova.network.neutron [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.618 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0f6f8aa7-8a43-4471-afed-4203d5b80b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.618 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance a11dbe8f-56c0-469a-91dc-1f2104aedd13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.618 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0429af3d-07c5-445e-bc3d-8df845af8e75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.618 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.619 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:18:57 up 27 min,  0 user,  load average: 0.17, 0.18, 0.26\n', 'num_instances': '3', 'num_vm_active': '2', 'num_task_None': '2', 'num_os_type_None': '3', 'num_proj_36f986860cbf4338bf6afd8aa7b4d147': '3', 'io_workload': '1', 'num_vm_building': '1', 'num_task_networking': '1'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.628 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.629 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.629 2 INFO nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Creating image(s)
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.630 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "/var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.630 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "/var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.631 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "/var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.631 2 DEBUG oslo_utils.imageutils.format_inspector [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.634 2 DEBUG oslo_utils.imageutils.format_inspector [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.635 2 DEBUG nova.network.neutron [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.639 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.682 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.689 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.690 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.691 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.692 2 DEBUG oslo_utils.imageutils.format_inspector [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.697 2 DEBUG oslo_utils.imageutils.format_inspector [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.697 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.754 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.755 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.790 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.791 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.100s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.791 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.841 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.842 2 DEBUG nova.virt.disk.api [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Checking if we can resize image /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.843 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.897 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.898 2 DEBUG nova.virt.disk.api [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Cannot resize image /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.898 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.899 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Ensure instance console log exists: /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.899 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.899 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:18:58 compute-0 nova_compute[117413]: 2025-10-08 16:18:58.900 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.147 2 DEBUG oslo_concurrency.lockutils [req-9cd35213-5881-4f3f-a2f3-ad23e95681e7 req-c4d84897-45cb-4ce8-a13b-7545a126e185 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-0429af3d-07c5-445e-bc3d-8df845af8e75" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.148 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquired lock "refresh_cache-0429af3d-07c5-445e-bc3d-8df845af8e75" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.148 2 DEBUG nova.network.neutron [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.190 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.700 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.701 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.121s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:18:59 compute-0 podman[127881]: time="2025-10-08T16:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:18:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:18:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3484 "" "Go-http-client/1.1"
Oct 08 16:18:59 compute-0 nova_compute[117413]: 2025-10-08 16:18:59.854 2 DEBUG nova.network.neutron [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.031 2 WARNING neutronclient.v2_0.client [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.186 2 DEBUG nova.network.neutron [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Updating instance_info_cache with network_info: [{"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:19:00 compute-0 podman[143143]: 2025-10-08 16:19:00.486572808 +0000 UTC m=+0.082859921 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.695 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Releasing lock "refresh_cache-0429af3d-07c5-445e-bc3d-8df845af8e75" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.696 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Instance network_info: |[{"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.696 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.698 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Start _get_guest_xml network_info=[{"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.699 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.700 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.700 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.700 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.700 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.703 2 WARNING nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.705 2 DEBUG nova.virt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-182345068', uuid='0429af3d-07c5-445e-bc3d-8df845af8e75'), owner=OwnerMeta(userid='723962be4e3d48efb441d80077ac4263', username='tempest-TestExecuteActionsViaActuator-898376163-project-admin', projectid='36f986860cbf4338bf6afd8aa7b4d147', projectname='tempest-TestExecuteActionsViaActuator-898376163'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": 
"14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940340.70499) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.709 2 DEBUG nova.virt.libvirt.host [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.709 2 DEBUG nova.virt.libvirt.host [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.712 2 DEBUG nova.virt.libvirt.host [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.712 2 DEBUG nova.virt.libvirt.host [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.712 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.713 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.713 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.713 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.713 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.714 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.714 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.714 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.714 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.714 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.715 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.715 2 DEBUG nova.virt.hardware [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.718 2 DEBUG nova.virt.libvirt.vif [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-182345068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-182345068',id=9,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-pwekqbnq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActu
ator-898376163-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:18:57Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=0429af3d-07c5-445e-bc3d-8df845af8e75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.718 2 DEBUG nova.network.os_vif_util [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.719 2 DEBUG nova.network.os_vif_util [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.720 2 DEBUG nova.objects.instance [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0429af3d-07c5-445e-bc3d-8df845af8e75 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:00 compute-0 nova_compute[117413]: 2025-10-08 16:19:00.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.228 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <uuid>0429af3d-07c5-445e-bc3d-8df845af8e75</uuid>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <name>instance-00000009</name>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-182345068</nova:name>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:19:00</nova:creationTime>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:19:01 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:19:01 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:user uuid="723962be4e3d48efb441d80077ac4263">tempest-TestExecuteActionsViaActuator-898376163-project-admin</nova:user>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:project uuid="36f986860cbf4338bf6afd8aa7b4d147">tempest-TestExecuteActionsViaActuator-898376163</nova:project>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         <nova:port uuid="14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e">
Oct 08 16:19:01 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <system>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <entry name="serial">0429af3d-07c5-445e-bc3d-8df845af8e75</entry>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <entry name="uuid">0429af3d-07c5-445e-bc3d-8df845af8e75</entry>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </system>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <os>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </os>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <features>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </features>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.config"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:26:63:14"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <target dev="tap14b9a922-7b"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/console.log" append="off"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <video>
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </video>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:19:01 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:19:01 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:19:01 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:19:01 compute-0 nova_compute[117413]: </domain>
Oct 08 16:19:01 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.230 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Preparing to wait for external event network-vif-plugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.231 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.231 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.231 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.232 2 DEBUG nova.virt.libvirt.vif [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-182345068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-182345068',id=9,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-pwekqbnq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActi
onsViaActuator-898376163-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:18:57Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=0429af3d-07c5-445e-bc3d-8df845af8e75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.232 2 DEBUG nova.network.os_vif_util [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.232 2 DEBUG nova.network.os_vif_util [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.233 2 DEBUG os_vif [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '826ea4b2-b346-598e-813c-ac394fd5f545', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14b9a922-7b, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap14b9a922-7b, col_values=(('qos', UUID('886fde64-d1f6-4682-8a87-cfdc90f61137')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap14b9a922-7b, col_values=(('external_ids', {'iface-id': '14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:63:14', 'vm-uuid': '0429af3d-07c5-445e-bc3d-8df845af8e75'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 NetworkManager[1034]: <info>  [1759940341.2433] manager: (tap14b9a922-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.249 2 INFO os_vif [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b')
Oct 08 16:19:01 compute-0 nova_compute[117413]: 2025-10-08 16:19:01.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:01 compute-0 openstack_network_exporter[130039]: ERROR   16:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:19:01 compute-0 openstack_network_exporter[130039]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:19:01 compute-0 openstack_network_exporter[130039]: ERROR   16:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:19:01 compute-0 openstack_network_exporter[130039]: ERROR   16:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:19:01 compute-0 openstack_network_exporter[130039]: ERROR   16:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:19:02 compute-0 nova_compute[117413]: 2025-10-08 16:19:02.784 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:19:02 compute-0 nova_compute[117413]: 2025-10-08 16:19:02.785 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:19:02 compute-0 nova_compute[117413]: 2025-10-08 16:19:02.785 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] No VIF found with MAC fa:16:3e:26:63:14, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:19:02 compute-0 nova_compute[117413]: 2025-10-08 16:19:02.785 2 INFO nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Using config drive
Oct 08 16:19:03 compute-0 nova_compute[117413]: 2025-10-08 16:19:03.295 2 WARNING neutronclient.v2_0.client [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:03 compute-0 nova_compute[117413]: 2025-10-08 16:19:03.937 2 INFO nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Creating config drive at /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.config
Oct 08 16:19:03 compute-0 nova_compute[117413]: 2025-10-08 16:19:03.942 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpyhkylxbo execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.071 2 DEBUG oslo_concurrency.processutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpyhkylxbo" returned: 0 in 0.129s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:04 compute-0 kernel: tap14b9a922-7b: entered promiscuous mode
Oct 08 16:19:04 compute-0 NetworkManager[1034]: <info>  [1759940344.1268] manager: (tap14b9a922-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Oct 08 16:19:04 compute-0 ovn_controller[19768]: 2025-10-08T16:19:04Z|00058|binding|INFO|Claiming lport 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e for this chassis.
Oct 08 16:19:04 compute-0 ovn_controller[19768]: 2025-10-08T16:19:04Z|00059|binding|INFO|14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e: Claiming fa:16:3e:26:63:14 10.100.0.12
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.136 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:63:14 10.100.0.12'], port_security=['fa:16:3e:26:63:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0429af3d-07c5-445e-bc3d-8df845af8e75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '4', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.137 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b bound to our chassis
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.139 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:19:04 compute-0 ovn_controller[19768]: 2025-10-08T16:19:04Z|00060|binding|INFO|Setting lport 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e ovn-installed in OVS
Oct 08 16:19:04 compute-0 ovn_controller[19768]: 2025-10-08T16:19:04Z|00061|binding|INFO|Setting lport 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e up in Southbound
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.158 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[caa7ff6b-9dbd-4a09-a3af-dc3037d0e7ca]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 systemd-udevd[143184]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:19:04 compute-0 systemd-machined[77548]: New machine qemu-4-instance-00000009.
Oct 08 16:19:04 compute-0 NetworkManager[1034]: <info>  [1759940344.1758] device (tap14b9a922-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:19:04 compute-0 NetworkManager[1034]: <info>  [1759940344.1770] device (tap14b9a922-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:19:04 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.196 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[73087df8-5ccc-46ff-9240-1ba6b136cf96]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.199 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a6749d73-7517-4037-9cf1-1cb80bf1de94]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.235 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4d9bb8-be4f-4390-bba2-b91964ddda0f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.255 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[40f46bcd-fcae-4c60-b4c6-42c07e136569]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143194, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.275 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[66bca537-56f9-428b-9a39-ea8db6066f70]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143198, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143198, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.277 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.282 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.283 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.283 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.283 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:04.285 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b89bef88-9a9c-4b02-bb27-af66568e4247]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.867 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.868 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.968 2 DEBUG nova.compute.manager [req-db82c0c7-1183-469e-8cff-dea87ff5146c req-4d14dc07-5aeb-4cab-af57-4882899a3af7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-plugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.969 2 DEBUG oslo_concurrency.lockutils [req-db82c0c7-1183-469e-8cff-dea87ff5146c req-4d14dc07-5aeb-4cab-af57-4882899a3af7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.970 2 DEBUG oslo_concurrency.lockutils [req-db82c0c7-1183-469e-8cff-dea87ff5146c req-4d14dc07-5aeb-4cab-af57-4882899a3af7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.970 2 DEBUG oslo_concurrency.lockutils [req-db82c0c7-1183-469e-8cff-dea87ff5146c req-4d14dc07-5aeb-4cab-af57-4882899a3af7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:04 compute-0 nova_compute[117413]: 2025-10-08 16:19:04.970 2 DEBUG nova.compute.manager [req-db82c0c7-1183-469e-8cff-dea87ff5146c req-4d14dc07-5aeb-4cab-af57-4882899a3af7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Processing event network-vif-plugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.093 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.097 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.100 2 INFO nova.virt.libvirt.driver [-] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Instance spawned successfully.
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.100 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.611 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.612 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.612 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.612 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.612 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.613 2 DEBUG nova.virt.libvirt.driver [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.870 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:05 compute-0 nova_compute[117413]: 2025-10-08 16:19:05.871 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:19:06 compute-0 nova_compute[117413]: 2025-10-08 16:19:06.120 2 INFO nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Took 7.49 seconds to spawn the instance on the hypervisor.
Oct 08 16:19:06 compute-0 nova_compute[117413]: 2025-10-08 16:19:06.121 2 DEBUG nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:19:06 compute-0 nova_compute[117413]: 2025-10-08 16:19:06.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:06 compute-0 nova_compute[117413]: 2025-10-08 16:19:06.377 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:19:06 compute-0 nova_compute[117413]: 2025-10-08 16:19:06.651 2 INFO nova.compute.manager [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Took 12.74 seconds to build instance.
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.028 2 DEBUG nova.compute.manager [req-c7b24151-0db6-4a71-aeca-e0275f94a604 req-df8dfdf8-fcac-47fb-b718-f868164c3348 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-plugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.028 2 DEBUG oslo_concurrency.lockutils [req-c7b24151-0db6-4a71-aeca-e0275f94a604 req-df8dfdf8-fcac-47fb-b718-f868164c3348 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.029 2 DEBUG oslo_concurrency.lockutils [req-c7b24151-0db6-4a71-aeca-e0275f94a604 req-df8dfdf8-fcac-47fb-b718-f868164c3348 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.029 2 DEBUG oslo_concurrency.lockutils [req-c7b24151-0db6-4a71-aeca-e0275f94a604 req-df8dfdf8-fcac-47fb-b718-f868164c3348 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.029 2 DEBUG nova.compute.manager [req-c7b24151-0db6-4a71-aeca-e0275f94a604 req-df8dfdf8-fcac-47fb-b718-f868164c3348 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] No waiting events found dispatching network-vif-plugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.029 2 WARNING nova.compute.manager [req-c7b24151-0db6-4a71-aeca-e0275f94a604 req-df8dfdf8-fcac-47fb-b718-f868164c3348 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received unexpected event network-vif-plugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e for instance with vm_state active and task_state None.
Oct 08 16:19:07 compute-0 nova_compute[117413]: 2025-10-08 16:19:07.156 2 DEBUG oslo_concurrency.lockutils [None req-4c6e2e82-4fd4-46cf-bf8c-de124e9ec8ab 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.263s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:07 compute-0 podman[143207]: 2025-10-08 16:19:07.498585889 +0000 UTC m=+0.082902083 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:19:10 compute-0 nova_compute[117413]: 2025-10-08 16:19:10.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:11 compute-0 nova_compute[117413]: 2025-10-08 16:19:11.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:12 compute-0 podman[143228]: 2025-10-08 16:19:12.493150419 +0000 UTC m=+0.087492582 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:19:15 compute-0 nova_compute[117413]: 2025-10-08 16:19:15.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:16 compute-0 nova_compute[117413]: 2025-10-08 16:19:16.058 2 DEBUG nova.compute.manager [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Stashing vm_state: active _prep_resize /usr/lib/python3.12/site-packages/nova/compute/manager.py:6169
Oct 08 16:19:16 compute-0 nova_compute[117413]: 2025-10-08 16:19:16.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:16 compute-0 nova_compute[117413]: 2025-10-08 16:19:16.598 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:16 compute-0 nova_compute[117413]: 2025-10-08 16:19:16.599 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.115 2 DEBUG nova.objects.instance [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'pci_requests' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:17 compute-0 ovn_controller[19768]: 2025-10-08T16:19:17Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:63:14 10.100.0.12
Oct 08 16:19:17 compute-0 ovn_controller[19768]: 2025-10-08T16:19:17Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:63:14 10.100.0.12
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.454 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Creating tmpfile /var/lib/nova/instances/tmplwkpvapi to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.455 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:17 compute-0 podman[143265]: 2025-10-08 16:19:17.471664933 +0000 UTC m=+0.066372822 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:19:17 compute-0 podman[143266]: 2025-10-08 16:19:17.507766892 +0000 UTC m=+0.104645312 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.532 2 DEBUG nova.compute.manager [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=72704,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplwkpvapi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.552 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.553 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.625 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.626 2 INFO nova.compute.claims [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:19:17 compute-0 nova_compute[117413]: 2025-10-08 16:19:17.626 2 DEBUG nova.objects.instance [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'resources' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:18 compute-0 nova_compute[117413]: 2025-10-08 16:19:18.059 2 INFO nova.compute.rpcapi [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Automatically selected compute RPC version 6.4 from minimum service version 70
Oct 08 16:19:18 compute-0 nova_compute[117413]: 2025-10-08 16:19:18.060 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:19:18 compute-0 nova_compute[117413]: 2025-10-08 16:19:18.135 2 DEBUG nova.objects.base [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<792f361b-347c-4139-b0a6-9eace69ac31d> lazy-loaded attributes: pci_requests,resources wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:19:18 compute-0 nova_compute[117413]: 2025-10-08 16:19:18.135 2 DEBUG nova.objects.instance [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'numa_topology' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:18 compute-0 nova_compute[117413]: 2025-10-08 16:19:18.643 2 DEBUG nova.objects.base [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<792f361b-347c-4139-b0a6-9eace69ac31d> lazy-loaded attributes: pci_requests,resources,numa_topology wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:19:18 compute-0 nova_compute[117413]: 2025-10-08 16:19:18.644 2 DEBUG nova.objects.instance [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'pci_devices' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:19 compute-0 nova_compute[117413]: 2025-10-08 16:19:19.151 2 DEBUG nova.objects.base [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<792f361b-347c-4139-b0a6-9eace69ac31d> lazy-loaded attributes: pci_requests,resources,numa_topology,pci_devices wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:19:19 compute-0 nova_compute[117413]: 2025-10-08 16:19:19.667 2 INFO nova.compute.resource_tracker [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updating resource usage from migration 8476a62c-830d-4a38-8ea6-80d37ffa7df0
Oct 08 16:19:19 compute-0 nova_compute[117413]: 2025-10-08 16:19:19.668 2 DEBUG nova.compute.resource_tracker [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Starting to track incoming migration 8476a62c-830d-4a38-8ea6-80d37ffa7df0 with flavor 43cd5d45-bd07-4889-a671-dd23291090c1 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 08 16:19:20 compute-0 nova_compute[117413]: 2025-10-08 16:19:20.078 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:20 compute-0 nova_compute[117413]: 2025-10-08 16:19:20.272 2 DEBUG nova.compute.provider_tree [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:19:20 compute-0 nova_compute[117413]: 2025-10-08 16:19:20.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:20 compute-0 nova_compute[117413]: 2025-10-08 16:19:20.808 2 DEBUG nova.scheduler.client.report [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:19:21 compute-0 nova_compute[117413]: 2025-10-08 16:19:21.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:21 compute-0 nova_compute[117413]: 2025-10-08 16:19:21.387 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 4.788s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:21 compute-0 nova_compute[117413]: 2025-10-08 16:19:21.388 2 INFO nova.compute.manager [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Migrating
Oct 08 16:19:24 compute-0 nova_compute[117413]: 2025-10-08 16:19:24.806 2 DEBUG nova.compute.manager [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplwkpvapi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a7fde225-dfe9-46d6-a12e-df2beab37b0c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:19:25 compute-0 sshd-session[143315]: Accepted publickey for nova from 192.168.122.101 port 52276 ssh2: ECDSA SHA256:I7ik7oXGa4FFlhvEKs86SSdzFx+FoJO9gKKWX5Y1pi4
Oct 08 16:19:25 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Oct 08 16:19:25 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 08 16:19:25 compute-0 systemd-logind[847]: New session 13 of user nova.
Oct 08 16:19:25 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 08 16:19:25 compute-0 systemd[1]: Starting User Manager for UID 42436...
Oct 08 16:19:25 compute-0 systemd[143319]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 08 16:19:25 compute-0 systemd[143319]: Queued start job for default target Main User Target.
Oct 08 16:19:25 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:19:25 compute-0 systemd[143319]: Created slice User Application Slice.
Oct 08 16:19:25 compute-0 sshd-session[143315]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 08 16:19:25 compute-0 systemd[143319]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 16:19:25 compute-0 systemd[143319]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 16:19:25 compute-0 systemd[143319]: Reached target Paths.
Oct 08 16:19:25 compute-0 systemd[143319]: Reached target Timers.
Oct 08 16:19:25 compute-0 systemd[143319]: Starting D-Bus User Message Bus Socket...
Oct 08 16:19:25 compute-0 systemd[143319]: Starting Create User's Volatile Files and Directories...
Oct 08 16:19:25 compute-0 systemd[143319]: Listening on D-Bus User Message Bus Socket.
Oct 08 16:19:25 compute-0 systemd[143319]: Reached target Sockets.
Oct 08 16:19:25 compute-0 systemd[143319]: Finished Create User's Volatile Files and Directories.
Oct 08 16:19:25 compute-0 systemd[143319]: Reached target Basic System.
Oct 08 16:19:25 compute-0 systemd[143319]: Reached target Main User Target.
Oct 08 16:19:25 compute-0 systemd[143319]: Startup finished in 148ms.
Oct 08 16:19:25 compute-0 systemd[1]: Started User Manager for UID 42436.
Oct 08 16:19:25 compute-0 systemd[1]: Started Session 13 of User nova.
Oct 08 16:19:25 compute-0 nova_compute[117413]: 2025-10-08 16:19:25.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:25 compute-0 sshd-session[143334]: Received disconnect from 192.168.122.101 port 52276:11: disconnected by user
Oct 08 16:19:25 compute-0 sshd-session[143334]: Disconnected from user nova 192.168.122.101 port 52276
Oct 08 16:19:25 compute-0 sshd-session[143315]: pam_unix(sshd:session): session closed for user nova
Oct 08 16:19:25 compute-0 nova_compute[117413]: 2025-10-08 16:19:25.823 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-a7fde225-dfe9-46d6-a12e-df2beab37b0c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:19:25 compute-0 nova_compute[117413]: 2025-10-08 16:19:25.824 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-a7fde225-dfe9-46d6-a12e-df2beab37b0c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:19:25 compute-0 nova_compute[117413]: 2025-10-08 16:19:25.824 2 DEBUG nova.network.neutron [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:19:25 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Oct 08 16:19:25 compute-0 systemd-logind[847]: Session 13 logged out. Waiting for processes to exit.
Oct 08 16:19:25 compute-0 systemd-logind[847]: Removed session 13.
Oct 08 16:19:25 compute-0 sshd-session[143337]: Accepted publickey for nova from 192.168.122.101 port 52288 ssh2: ECDSA SHA256:I7ik7oXGa4FFlhvEKs86SSdzFx+FoJO9gKKWX5Y1pi4
Oct 08 16:19:25 compute-0 systemd-logind[847]: New session 15 of user nova.
Oct 08 16:19:25 compute-0 systemd[1]: Started Session 15 of User nova.
Oct 08 16:19:25 compute-0 sshd-session[143337]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 08 16:19:26 compute-0 sshd-session[143340]: Received disconnect from 192.168.122.101 port 52288:11: disconnected by user
Oct 08 16:19:26 compute-0 sshd-session[143340]: Disconnected from user nova 192.168.122.101 port 52288
Oct 08 16:19:26 compute-0 sshd-session[143337]: pam_unix(sshd:session): session closed for user nova
Oct 08 16:19:26 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Oct 08 16:19:26 compute-0 systemd-logind[847]: Session 15 logged out. Waiting for processes to exit.
Oct 08 16:19:26 compute-0 systemd-logind[847]: Removed session 15.
Oct 08 16:19:26 compute-0 nova_compute[117413]: 2025-10-08 16:19:26.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:26 compute-0 nova_compute[117413]: 2025-10-08 16:19:26.332 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:26 compute-0 nova_compute[117413]: 2025-10-08 16:19:26.711 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:26 compute-0 nova_compute[117413]: 2025-10-08 16:19:26.945 2 DEBUG nova.network.neutron [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Updating instance_info_cache with network_info: [{"id": "96506174-a98c-4a5a-ae80-848833d70dbb", "address": "fa:16:3e:e8:2b:94", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96506174-a9", "ovs_interfaceid": "96506174-a98c-4a5a-ae80-848833d70dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.456 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-a7fde225-dfe9-46d6-a12e-df2beab37b0c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.472 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplwkpvapi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a7fde225-dfe9-46d6-a12e-df2beab37b0c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.473 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Creating instance directory: /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.473 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Creating disk.info with the contents: {'/var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk': 'qcow2', '/var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.474 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.474 2 DEBUG nova.objects.instance [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a7fde225-dfe9-46d6-a12e-df2beab37b0c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.980 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.983 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:19:27 compute-0 nova_compute[117413]: 2025-10-08 16:19:27.985 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.082 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.083 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.083 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.084 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.087 2 DEBUG oslo_utils.imageutils.format_inspector [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.087 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.157 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.158 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.193 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.194 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.195 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.284 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.286 2 DEBUG nova.virt.disk.api [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.286 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.355 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.358 2 DEBUG nova.virt.disk.api [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.359 2 DEBUG nova.objects.instance [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid a7fde225-dfe9-46d6-a12e-df2beab37b0c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.406 2 DEBUG nova.compute.manager [req-027a83cf-4821-4aea-9b3b-794a1d1b8100 req-131f42a2-f48c-43aa-ac02-4513347f3237 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.407 2 DEBUG oslo_concurrency.lockutils [req-027a83cf-4821-4aea-9b3b-794a1d1b8100 req-131f42a2-f48c-43aa-ac02-4513347f3237 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.407 2 DEBUG oslo_concurrency.lockutils [req-027a83cf-4821-4aea-9b3b-794a1d1b8100 req-131f42a2-f48c-43aa-ac02-4513347f3237 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.408 2 DEBUG oslo_concurrency.lockutils [req-027a83cf-4821-4aea-9b3b-794a1d1b8100 req-131f42a2-f48c-43aa-ac02-4513347f3237 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.408 2 DEBUG nova.compute.manager [req-027a83cf-4821-4aea-9b3b-794a1d1b8100 req-131f42a2-f48c-43aa-ac02-4513347f3237 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] No waiting events found dispatching network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.408 2 WARNING nova.compute.manager [req-027a83cf-4821-4aea-9b3b-794a1d1b8100 req-131f42a2-f48c-43aa-ac02-4513347f3237 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received unexpected event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e for instance with vm_state active and task_state resize_migrating.
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.868 2 DEBUG nova.objects.base [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<a7fde225-dfe9-46d6-a12e-df2beab37b0c> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.869 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.911 2 DEBUG oslo_concurrency.processutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk.config 497664" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.912 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.915 2 DEBUG nova.virt.libvirt.vif [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1864209771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1864209771',id=6,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:17:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-1xu3zrnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:18:00Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=a7fde225-dfe9-46d6-a12e-df2beab37b0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96506174-a98c-4a5a-ae80-848833d70dbb", "address": "fa:16:3e:e8:2b:94", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap96506174-a9", "ovs_interfaceid": "96506174-a98c-4a5a-ae80-848833d70dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.915 2 DEBUG nova.network.os_vif_util [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "96506174-a98c-4a5a-ae80-848833d70dbb", "address": "fa:16:3e:e8:2b:94", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap96506174-a9", "ovs_interfaceid": "96506174-a98c-4a5a-ae80-848833d70dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.916 2 DEBUG nova.network.os_vif_util [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=96506174-a98c-4a5a-ae80-848833d70dbb,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96506174-a9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.917 2 DEBUG os_vif [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=96506174-a98c-4a5a-ae80-848833d70dbb,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96506174-a9') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '6ce416e4-8fc9-594c-baaf-8bea9af238e8', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96506174-a9, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap96506174-a9, col_values=(('qos', UUID('70bb9b39-eae5-4072-b5e0-348124d90fd8')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap96506174-a9, col_values=(('external_ids', {'iface-id': '96506174-a98c-4a5a-ae80-848833d70dbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:2b:94', 'vm-uuid': 'a7fde225-dfe9-46d6-a12e-df2beab37b0c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 NetworkManager[1034]: <info>  [1759940368.9341] manager: (tap96506174-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.945 2 INFO os_vif [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=96506174-a98c-4a5a-ae80-848833d70dbb,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96506174-a9')
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.946 2 DEBUG nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.946 2 DEBUG nova.compute.manager [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplwkpvapi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a7fde225-dfe9-46d6-a12e-df2beab37b0c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:19:28 compute-0 nova_compute[117413]: 2025-10-08 16:19:28.947 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:29 compute-0 sshd-session[143362]: Accepted publickey for nova from 192.168.122.101 port 52298 ssh2: ECDSA SHA256:I7ik7oXGa4FFlhvEKs86SSdzFx+FoJO9gKKWX5Y1pi4
Oct 08 16:19:29 compute-0 systemd-logind[847]: New session 16 of user nova.
Oct 08 16:19:29 compute-0 systemd[1]: Started Session 16 of User nova.
Oct 08 16:19:29 compute-0 sshd-session[143362]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 08 16:19:29 compute-0 podman[143364]: 2025-10-08 16:19:29.427296125 +0000 UTC m=+0.085405294 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:19:29 compute-0 podman[127881]: time="2025-10-08T16:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:19:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:19:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3480 "" "Go-http-client/1.1"
Oct 08 16:19:29 compute-0 sshd-session[143371]: Received disconnect from 192.168.122.101 port 52298:11: disconnected by user
Oct 08 16:19:29 compute-0 sshd-session[143371]: Disconnected from user nova 192.168.122.101 port 52298
Oct 08 16:19:29 compute-0 sshd-session[143362]: pam_unix(sshd:session): session closed for user nova
Oct 08 16:19:29 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Oct 08 16:19:29 compute-0 systemd-logind[847]: Session 16 logged out. Waiting for processes to exit.
Oct 08 16:19:29 compute-0 systemd-logind[847]: Removed session 16.
Oct 08 16:19:29 compute-0 sshd-session[143390]: Accepted publickey for nova from 192.168.122.101 port 52308 ssh2: ECDSA SHA256:I7ik7oXGa4FFlhvEKs86SSdzFx+FoJO9gKKWX5Y1pi4
Oct 08 16:19:29 compute-0 systemd-logind[847]: New session 17 of user nova.
Oct 08 16:19:29 compute-0 systemd[1]: Started Session 17 of User nova.
Oct 08 16:19:29 compute-0 sshd-session[143390]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 08 16:19:29 compute-0 nova_compute[117413]: 2025-10-08 16:19:29.996 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:30 compute-0 sshd-session[143393]: Received disconnect from 192.168.122.101 port 52308:11: disconnected by user
Oct 08 16:19:30 compute-0 sshd-session[143393]: Disconnected from user nova 192.168.122.101 port 52308
Oct 08 16:19:30 compute-0 sshd-session[143390]: pam_unix(sshd:session): session closed for user nova
Oct 08 16:19:30 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Oct 08 16:19:30 compute-0 systemd-logind[847]: Session 17 logged out. Waiting for processes to exit.
Oct 08 16:19:30 compute-0 systemd-logind[847]: Removed session 17.
Oct 08 16:19:30 compute-0 sshd-session[143395]: Accepted publickey for nova from 192.168.122.101 port 52314 ssh2: ECDSA SHA256:I7ik7oXGa4FFlhvEKs86SSdzFx+FoJO9gKKWX5Y1pi4
Oct 08 16:19:30 compute-0 systemd-logind[847]: New session 18 of user nova.
Oct 08 16:19:30 compute-0 systemd[1]: Started Session 18 of User nova.
Oct 08 16:19:30 compute-0 sshd-session[143395]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 08 16:19:30 compute-0 sshd-session[143398]: Received disconnect from 192.168.122.101 port 52314:11: disconnected by user
Oct 08 16:19:30 compute-0 sshd-session[143398]: Disconnected from user nova 192.168.122.101 port 52314
Oct 08 16:19:30 compute-0 sshd-session[143395]: pam_unix(sshd:session): session closed for user nova
Oct 08 16:19:30 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Oct 08 16:19:30 compute-0 systemd-logind[847]: Session 18 logged out. Waiting for processes to exit.
Oct 08 16:19:30 compute-0 systemd-logind[847]: Removed session 18.
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.481 2 DEBUG nova.compute.manager [req-59a70b9e-d2b1-44b8-a0d9-27fafd409582 req-775cd438-c751-4697-b1c6-4dc3c3349532 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.482 2 DEBUG oslo_concurrency.lockutils [req-59a70b9e-d2b1-44b8-a0d9-27fafd409582 req-775cd438-c751-4697-b1c6-4dc3c3349532 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.483 2 DEBUG oslo_concurrency.lockutils [req-59a70b9e-d2b1-44b8-a0d9-27fafd409582 req-775cd438-c751-4697-b1c6-4dc3c3349532 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.483 2 DEBUG oslo_concurrency.lockutils [req-59a70b9e-d2b1-44b8-a0d9-27fafd409582 req-775cd438-c751-4697-b1c6-4dc3c3349532 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.483 2 DEBUG nova.compute.manager [req-59a70b9e-d2b1-44b8-a0d9-27fafd409582 req-775cd438-c751-4697-b1c6-4dc3c3349532 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] No waiting events found dispatching network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.484 2 WARNING nova.compute.manager [req-59a70b9e-d2b1-44b8-a0d9-27fafd409582 req-775cd438-c751-4697-b1c6-4dc3c3349532 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received unexpected event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e for instance with vm_state active and task_state resize_migrating.
Oct 08 16:19:30 compute-0 nova_compute[117413]: 2025-10-08 16:19:30.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: ERROR   16:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: ERROR   16:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: ERROR   16:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: ERROR   16:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:19:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:19:31 compute-0 podman[143400]: 2025-10-08 16:19:31.521079819 +0000 UTC m=+0.110465558 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., 
container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:19:32 compute-0 nova_compute[117413]: 2025-10-08 16:19:32.063 2 DEBUG nova.network.neutron [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Port 96506174-a98c-4a5a-ae80-848833d70dbb updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:19:32 compute-0 nova_compute[117413]: 2025-10-08 16:19:32.079 2 DEBUG nova.compute.manager [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmplwkpvapi',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a7fde225-dfe9-46d6-a12e-df2beab37b0c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:19:32 compute-0 nova_compute[117413]: 2025-10-08 16:19:32.978 2 WARNING neutronclient.v2_0.client [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:33 compute-0 nova_compute[117413]: 2025-10-08 16:19:33.110 2 INFO nova.network.neutron [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updating port 6a3204f0-278d-4801-ad42-56b69d44ee1e with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 08 16:19:33 compute-0 nova_compute[117413]: 2025-10-08 16:19:33.750 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-792f361b-347c-4139-b0a6-9eace69ac31d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:19:33 compute-0 nova_compute[117413]: 2025-10-08 16:19:33.750 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-792f361b-347c-4139-b0a6-9eace69ac31d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:19:33 compute-0 nova_compute[117413]: 2025-10-08 16:19:33.751 2 DEBUG nova.network.neutron [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:19:33 compute-0 nova_compute[117413]: 2025-10-08 16:19:33.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.256 2 WARNING neutronclient.v2_0.client [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:34 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 16:19:34 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.612 2 DEBUG nova.compute.manager [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-changed-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.613 2 DEBUG nova.compute.manager [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Refreshing instance network info cache due to event network-changed-6a3204f0-278d-4801-ad42-56b69d44ee1e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.613 2 DEBUG oslo_concurrency.lockutils [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-792f361b-347c-4139-b0a6-9eace69ac31d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:19:34 compute-0 NetworkManager[1034]: <info>  [1759940374.6264] manager: (tap96506174-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Oct 08 16:19:34 compute-0 kernel: tap96506174-a9: entered promiscuous mode
Oct 08 16:19:34 compute-0 systemd-udevd[143451]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:19:34 compute-0 ovn_controller[19768]: 2025-10-08T16:19:34Z|00062|binding|INFO|Claiming lport 96506174-a98c-4a5a-ae80-848833d70dbb for this additional chassis.
Oct 08 16:19:34 compute-0 ovn_controller[19768]: 2025-10-08T16:19:34Z|00063|binding|INFO|96506174-a98c-4a5a-ae80-848833d70dbb: Claiming fa:16:3e:e8:2b:94 10.100.0.14
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.684 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:2b:94 10.100.0.14'], port_security=['fa:16:3e:e8:2b:94 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a7fde225-dfe9-46d6-a12e-df2beab37b0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '10', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[], logical_port=96506174-a98c-4a5a-ae80-848833d70dbb) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.685 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 96506174-a98c-4a5a-ae80-848833d70dbb in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b unbound from our chassis
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.688 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:19:34 compute-0 ovn_controller[19768]: 2025-10-08T16:19:34Z|00064|binding|INFO|Setting lport 96506174-a98c-4a5a-ae80-848833d70dbb ovn-installed in OVS
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:34 compute-0 NetworkManager[1034]: <info>  [1759940374.7001] device (tap96506174-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:19:34 compute-0 NetworkManager[1034]: <info>  [1759940374.7020] device (tap96506174-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:19:34 compute-0 systemd-machined[77548]: New machine qemu-5-instance-00000006.
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.714 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c91f307a-5d6a-484b-86cd-283104a70b68]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:34 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000006.
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.757 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[54f0e498-e7c1-4b84-a848-c4ed4010a122]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.760 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebe425e-c617-44df-a49f-9b7b4668194e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.805 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[07f6b201-6378-4215-82bd-880b65c4ec72]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.829 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec0b84e-d9fa-4ae5-90f4-3902baf6e76c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143481, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.855 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[11dffe58-445b-4d0d-8c03-ff928086a8ad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143483, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143483, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.856 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:34 compute-0 nova_compute[117413]: 2025-10-08 16:19:34.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.859 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.860 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.860 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.861 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:34.862 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c518400b-79fe-4b26-be6e-ba3ae735c260]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:35 compute-0 nova_compute[117413]: 2025-10-08 16:19:35.181 2 WARNING neutronclient.v2_0.client [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:35 compute-0 nova_compute[117413]: 2025-10-08 16:19:35.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:35 compute-0 nova_compute[117413]: 2025-10-08 16:19:35.874 2 DEBUG nova.network.neutron [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updating instance_info_cache with network_info: [{"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:19:36 compute-0 nova_compute[117413]: 2025-10-08 16:19:36.404 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-792f361b-347c-4139-b0a6-9eace69ac31d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:19:36 compute-0 nova_compute[117413]: 2025-10-08 16:19:36.411 2 DEBUG oslo_concurrency.lockutils [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-792f361b-347c-4139-b0a6-9eace69ac31d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:19:36 compute-0 nova_compute[117413]: 2025-10-08 16:19:36.412 2 DEBUG nova.network.neutron [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Refreshing network info cache for port 6a3204f0-278d-4801-ad42-56b69d44ee1e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:19:36 compute-0 nova_compute[117413]: 2025-10-08 16:19:36.968 2 WARNING neutronclient.v2_0.client [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.227 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Starting finish_migration finish_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12604
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.231 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Instance directory exists: not creating _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5134
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.231 2 INFO nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Creating image(s)
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.232 2 DEBUG nova.objects.instance [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.590 2 WARNING neutronclient.v2_0.client [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.764 2 DEBUG oslo_concurrency.processutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.858 2 DEBUG oslo_concurrency.processutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.859 2 DEBUG nova.virt.disk.api [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.859 2 DEBUG oslo_concurrency.processutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.918 2 DEBUG nova.network.neutron [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updated VIF entry in instance network info cache for port 6a3204f0-278d-4801-ad42-56b69d44ee1e. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.919 2 DEBUG nova.network.neutron [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updating instance_info_cache with network_info: [{"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.927 2 DEBUG oslo_concurrency.processutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:37 compute-0 nova_compute[117413]: 2025-10-08 16:19:37.927 2 DEBUG nova.virt.disk.api [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:19:38 compute-0 podman[143511]: 2025-10-08 16:19:38.509091435 +0000 UTC m=+0.096593912 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid)
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.533 2 DEBUG oslo_concurrency.lockutils [req-771e3a13-c5e8-4206-8591-7c91bbe1f984 req-9f0bee5b-987e-42dc-be7f-10587f324335 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-792f361b-347c-4139-b0a6-9eace69ac31d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.572 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Did not create local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5272
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.573 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Ensure instance console log exists: /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.574 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.574 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.575 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.580 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Start _get_guest_xml network_info=[{"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "vif_mac": "fa:16:3e:6f:e4:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.588 2 WARNING nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.590 2 DEBUG nova.virt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteActionsViaActuator-server-556686919', uuid='792f361b-347c-4139-b0a6-9eace69ac31d'), owner=OwnerMeta(userid='723962be4e3d48efb441d80077ac4263', username='tempest-TestExecuteActionsViaActuator-898376163-project-admin', projectid='36f986860cbf4338bf6afd8aa7b4d147', projectname='tempest-TestExecuteActionsViaActuator-898376163'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio', 'hw_input_bus': 'usb', 'hw_machine_type': 'q35', 'hw_pointer_model': 'usbtablet', 'hw_rng_model': 'virtio', 'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "vif_mac": "fa:16:3e:6f:e4:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940378.590702) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.595 2 DEBUG nova.virt.libvirt.host [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.596 2 DEBUG nova.virt.libvirt.host [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.600 2 DEBUG nova.virt.libvirt.host [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.601 2 DEBUG nova.virt.libvirt.host [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.602 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.602 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.603 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.603 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.603 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.604 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.604 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.604 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.605 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.605 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.605 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.606 2 DEBUG nova.virt.hardware [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.606 2 DEBUG nova.objects.instance [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:38 compute-0 ovn_controller[19768]: 2025-10-08T16:19:38Z|00065|binding|INFO|Claiming lport 96506174-a98c-4a5a-ae80-848833d70dbb for this chassis.
Oct 08 16:19:38 compute-0 ovn_controller[19768]: 2025-10-08T16:19:38Z|00066|binding|INFO|96506174-a98c-4a5a-ae80-848833d70dbb: Claiming fa:16:3e:e8:2b:94 10.100.0.14
Oct 08 16:19:38 compute-0 ovn_controller[19768]: 2025-10-08T16:19:38Z|00067|binding|INFO|Setting lport 96506174-a98c-4a5a-ae80-848833d70dbb up in Southbound
Oct 08 16:19:38 compute-0 nova_compute[117413]: 2025-10-08 16:19:38.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.129 2 DEBUG nova.objects.base [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<792f361b-347c-4139-b0a6-9eace69ac31d> lazy-loaded attributes: trusted_certs,vcpu_model wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.135 2 DEBUG oslo_concurrency.processutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk.config --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.204 2 DEBUG oslo_concurrency.processutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk.config --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.205 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "/var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.205 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "/var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.206 2 DEBUG oslo_concurrency.lockutils [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "/var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.208 2 DEBUG nova.virt.libvirt.vif [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:18:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-556686919',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-556686919',id=8,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:18:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-0b36uv99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:19:31Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=792f361b-347c-4139-b0a6-9eace69ac31d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "vif_mac": "fa:16:3e:6f:e4:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.208 2 DEBUG nova.network.os_vif_util [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "vif_mac": "fa:16:3e:6f:e4:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.209 2 DEBUG nova.network.os_vif_util [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.212 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <uuid>792f361b-347c-4139-b0a6-9eace69ac31d</uuid>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <name>instance-00000008</name>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteActionsViaActuator-server-556686919</nova:name>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:19:38</nova:creationTime>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_cdrom_bus">sata</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_disk_bus">virtio</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_input_bus">usb</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_machine_type">q35</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_pointer_model">usbtablet</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_video_model">virtio</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:property name="hw_vif_model">virtio</nova:property>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:user uuid="723962be4e3d48efb441d80077ac4263">tempest-TestExecuteActionsViaActuator-898376163-project-admin</nova:user>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:project uuid="36f986860cbf4338bf6afd8aa7b4d147">tempest-TestExecuteActionsViaActuator-898376163</nova:project>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         <nova:port uuid="6a3204f0-278d-4801-ad42-56b69d44ee1e">
Oct 08 16:19:39 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <system>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <entry name="serial">792f361b-347c-4139-b0a6-9eace69ac31d</entry>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <entry name="uuid">792f361b-347c-4139-b0a6-9eace69ac31d</entry>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </system>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <os>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </os>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <features>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </features>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk.config"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:6f:e4:b5"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <target dev="tap6a3204f0-27"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/console.log" append="off"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <video>
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </video>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:19:39 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:19:39 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:19:39 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:19:39 compute-0 nova_compute[117413]: </domain>
Oct 08 16:19:39 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.214 2 DEBUG nova.virt.libvirt.vif [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:18:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-556686919',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-556686919',id=8,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:18:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-0b36uv99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:19:31Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=792f361b-347c-4139-b0a6-9eace69ac31d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "vif_mac": "fa:16:3e:6f:e4:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.214 2 DEBUG nova.network.os_vif_util [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "vif_mac": "fa:16:3e:6f:e4:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.214 2 DEBUG nova.network.os_vif_util [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.215 2 DEBUG os_vif [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.216 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.216 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'be2590e1-fd41-58b8-92b3-61d4a3e512e2', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a3204f0-27, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap6a3204f0-27, col_values=(('qos', UUID('cf17d6a6-b3d3-4d9b-97e5-ec9b6f5224af')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.226 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap6a3204f0-27, col_values=(('external_ids', {'iface-id': '6a3204f0-278d-4801-ad42-56b69d44ee1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:e4:b5', 'vm-uuid': '792f361b-347c-4139-b0a6-9eace69ac31d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 NetworkManager[1034]: <info>  [1759940379.2295] manager: (tap6a3204f0-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.240 2 INFO os_vif [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27')
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.915 2 INFO nova.compute.manager [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Post operation of migration started
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.916 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.990 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:39 compute-0 nova_compute[117413]: 2025-10-08 16:19:39.991 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.063 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-a7fde225-dfe9-46d6-a12e-df2beab37b0c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.063 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-a7fde225-dfe9-46d6-a12e-df2beab37b0c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.064 2 DEBUG nova.network.neutron [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:19:40 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Oct 08 16:19:40 compute-0 systemd[143319]: Activating special unit Exit the Session...
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped target Main User Target.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped target Basic System.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped target Paths.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped target Sockets.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped target Timers.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 16:19:40 compute-0 systemd[143319]: Closed D-Bus User Message Bus Socket.
Oct 08 16:19:40 compute-0 systemd[143319]: Stopped Create User's Volatile Files and Directories.
Oct 08 16:19:40 compute-0 systemd[143319]: Removed slice User Application Slice.
Oct 08 16:19:40 compute-0 systemd[143319]: Reached target Shutdown.
Oct 08 16:19:40 compute-0 systemd[143319]: Finished Exit the Session.
Oct 08 16:19:40 compute-0 systemd[143319]: Reached target Exit the Session.
Oct 08 16:19:40 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Oct 08 16:19:40 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Oct 08 16:19:40 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 08 16:19:40 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 08 16:19:40 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 08 16:19:40 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 08 16:19:40 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.572 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.811 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.812 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.812 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No VIF found with MAC fa:16:3e:6f:e4:b5, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.813 2 INFO nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Using config drive
Oct 08 16:19:40 compute-0 kernel: tap6a3204f0-27: entered promiscuous mode
Oct 08 16:19:40 compute-0 NetworkManager[1034]: <info>  [1759940380.8856] manager: (tap6a3204f0-27): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct 08 16:19:40 compute-0 ovn_controller[19768]: 2025-10-08T16:19:40Z|00068|binding|INFO|Claiming lport 6a3204f0-278d-4801-ad42-56b69d44ee1e for this chassis.
Oct 08 16:19:40 compute-0 ovn_controller[19768]: 2025-10-08T16:19:40Z|00069|binding|INFO|6a3204f0-278d-4801-ad42-56b69d44ee1e: Claiming fa:16:3e:6f:e4:b5 10.100.0.6
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:40.893 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:e4:b5 10.100.0.6'], port_security=['fa:16:3e:6f:e4:b5 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '792f361b-347c-4139-b0a6-9eace69ac31d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '9', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=6a3204f0-278d-4801-ad42-56b69d44ee1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:19:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:40.894 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 6a3204f0-278d-4801-ad42-56b69d44ee1e in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b bound to our chassis
Oct 08 16:19:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:40.896 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:19:40 compute-0 ovn_controller[19768]: 2025-10-08T16:19:40Z|00070|binding|INFO|Setting lport 6a3204f0-278d-4801-ad42-56b69d44ee1e ovn-installed in OVS
Oct 08 16:19:40 compute-0 ovn_controller[19768]: 2025-10-08T16:19:40Z|00071|binding|INFO|Setting lport 6a3204f0-278d-4801-ad42-56b69d44ee1e up in Southbound
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:40.916 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f22073e3-0c05-4031-ad04-e2b07c1f7362]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:40 compute-0 systemd-udevd[143555]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:19:40 compute-0 systemd-machined[77548]: New machine qemu-6-instance-00000008.
Oct 08 16:19:40 compute-0 NetworkManager[1034]: <info>  [1759940380.9381] device (tap6a3204f0-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:19:40 compute-0 NetworkManager[1034]: <info>  [1759940380.9391] device (tap6a3204f0-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:19:40 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000008.
Oct 08 16:19:40 compute-0 nova_compute[117413]: 2025-10-08 16:19:40.956 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:40.961 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[de2667ab-0ab0-4254-a63f-a735fedbb2a2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:40.964 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0fbea9-f8f0-4b45-83f3-a2c39331e893]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.000 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[925249e8-0c2e-46a3-bcfe-cfbfcdb4accb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.020 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0fba2473-e484-4e27-bad5-fbfc008d4a97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 11, 'rx_bytes': 1294, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 11, 'rx_bytes': 1294, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143567, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.041 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a386e43b-78ad-4650-a177-3290651049bf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143569, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143569, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.043 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.046 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.046 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.047 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.047 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.048 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8c827d02-e002-467c-a26e-4206a9bde869]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.178 2 DEBUG nova.network.neutron [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Updating instance_info_cache with network_info: [{"id": "96506174-a98c-4a5a-ae80-848833d70dbb", "address": "fa:16:3e:e8:2b:94", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96506174-a9", "ovs_interfaceid": "96506174-a98c-4a5a-ae80-848833d70dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.627 2 DEBUG nova.compute.manager [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-plugged-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.627 2 DEBUG oslo_concurrency.lockutils [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.628 2 DEBUG oslo_concurrency.lockutils [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.628 2 DEBUG oslo_concurrency.lockutils [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.628 2 DEBUG nova.compute.manager [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] No waiting events found dispatching network-vif-plugged-6a3204f0-278d-4801-ad42-56b69d44ee1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.628 2 WARNING nova.compute.manager [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received unexpected event network-vif-plugged-6a3204f0-278d-4801-ad42-56b69d44ee1e for instance with vm_state active and task_state resize_finish.
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.628 2 DEBUG nova.compute.manager [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-plugged-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.629 2 DEBUG oslo_concurrency.lockutils [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.629 2 DEBUG oslo_concurrency.lockutils [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.629 2 DEBUG oslo_concurrency.lockutils [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.629 2 DEBUG nova.compute.manager [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] No waiting events found dispatching network-vif-plugged-6a3204f0-278d-4801-ad42-56b69d44ee1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.629 2 WARNING nova.compute.manager [req-3edcb2b4-2d0c-42b3-86fd-6acc50ec3492 req-b26029f1-1e48-43b5-a298-323160a4c5d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received unexpected event network-vif-plugged-6a3204f0-278d-4801-ad42-56b69d44ee1e for instance with vm_state active and task_state resize_finish.
Oct 08 16:19:41 compute-0 nova_compute[117413]: 2025-10-08 16:19:41.685 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-a7fde225-dfe9-46d6-a12e-df2beab37b0c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.890 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.891 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:41.892 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.204 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.205 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.206 2 DEBUG oslo_concurrency.lockutils [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.212 2 INFO nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:19:42 compute-0 virtqemud[117740]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 08 16:19:42 compute-0 virtqemud[117740]: hostname: compute-0
Oct 08 16:19:42 compute-0 virtqemud[117740]: Domain id=5 name='instance-00000006' uuid=a7fde225-dfe9-46d6-a12e-df2beab37b0c is tainted: custom-monitor
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.330 2 DEBUG nova.compute.manager [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.334 2 INFO nova.virt.libvirt.driver [-] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Instance running successfully.
Oct 08 16:19:42 compute-0 virtqemud[117740]: argument unsupported: QEMU guest agent is not configured
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.336 2 DEBUG nova.virt.libvirt.guest [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:200
Oct 08 16:19:42 compute-0 nova_compute[117413]: 2025-10-08 16:19:42.336 2 DEBUG nova.virt.libvirt.driver [None req-673a3bb9-27d7-47f0-9a4c-7f02416f5015 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] finish_migration finished successfully. finish_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12699
Oct 08 16:19:43 compute-0 nova_compute[117413]: 2025-10-08 16:19:43.223 2 INFO nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:19:43 compute-0 podman[143580]: 2025-10-08 16:19:43.513116236 +0000 UTC m=+0.096424877 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:19:44 compute-0 nova_compute[117413]: 2025-10-08 16:19:44.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:44 compute-0 nova_compute[117413]: 2025-10-08 16:19:44.231 2 INFO nova.virt.libvirt.driver [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:19:44 compute-0 nova_compute[117413]: 2025-10-08 16:19:44.238 2 DEBUG nova.compute.manager [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:19:44 compute-0 nova_compute[117413]: 2025-10-08 16:19:44.759 2 DEBUG nova.objects.instance [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:19:45 compute-0 nova_compute[117413]: 2025-10-08 16:19:45.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:45 compute-0 nova_compute[117413]: 2025-10-08 16:19:45.777 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:45 compute-0 nova_compute[117413]: 2025-10-08 16:19:45.867 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:45 compute-0 nova_compute[117413]: 2025-10-08 16:19:45.867 2 WARNING neutronclient.v2_0.client [None req-af9fd981-e426-481b-b167-ec2fa02982b8 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:19:48 compute-0 podman[143599]: 2025-10-08 16:19:48.466733241 +0000 UTC m=+0.068877593 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:19:48 compute-0 podman[143600]: 2025-10-08 16:19:48.514765219 +0000 UTC m=+0.113601177 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007)
Oct 08 16:19:49 compute-0 nova_compute[117413]: 2025-10-08 16:19:49.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:50 compute-0 nova_compute[117413]: 2025-10-08 16:19:50.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:54 compute-0 nova_compute[117413]: 2025-10-08 16:19:54.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:55 compute-0 ovn_controller[19768]: 2025-10-08T16:19:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:e4:b5 10.100.0.6
Oct 08 16:19:55 compute-0 nova_compute[117413]: 2025-10-08 16:19:55.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:55 compute-0 nova_compute[117413]: 2025-10-08 16:19:55.869 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:19:56 compute-0 nova_compute[117413]: 2025-10-08 16:19:56.385 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:56 compute-0 nova_compute[117413]: 2025-10-08 16:19:56.385 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:56 compute-0 nova_compute[117413]: 2025-10-08 16:19:56.386 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:56 compute-0 nova_compute[117413]: 2025-10-08 16:19:56.386 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.452 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.522 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.524 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.586 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.594 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.684 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.686 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.748 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.756 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.819 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.821 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.892 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.899 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.959 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:57 compute-0 nova_compute[117413]: 2025-10-08 16:19:57.960 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.031 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.036 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.091 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.092 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.148 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.336 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.338 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.361 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.362 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5362MB free_disk=73.11902618408203GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.363 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:58 compute-0 nova_compute[117413]: 2025-10-08 16:19:58.363 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.108 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.109 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.111 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.111 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.112 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.127 2 INFO nova.compute.manager [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Terminating instance
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.640 2 DEBUG nova.compute.manager [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:19:59 compute-0 kernel: tap14b9a922-7b (unregistering): left promiscuous mode
Oct 08 16:19:59 compute-0 NetworkManager[1034]: <info>  [1759940399.6672] device (tap14b9a922-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:19:59 compute-0 ovn_controller[19768]: 2025-10-08T16:19:59Z|00072|binding|INFO|Releasing lport 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e from this chassis (sb_readonly=0)
Oct 08 16:19:59 compute-0 ovn_controller[19768]: 2025-10-08T16:19:59Z|00073|binding|INFO|Setting lport 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e down in Southbound
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:59 compute-0 ovn_controller[19768]: 2025-10-08T16:19:59Z|00074|binding|INFO|Removing iface tap14b9a922-7b ovn-installed in OVS
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.688 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:63:14 10.100.0.12'], port_security=['fa:16:3e:26:63:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '0429af3d-07c5-445e-bc3d-8df845af8e75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '5', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.691 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b unbound from our chassis
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.692 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.712 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf8b104-d9f2-4dcf-b2f3-5a16748d3a88]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 08 16:19:59 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 14.205s CPU time.
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.745 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f519d6-8bc7-49b9-8596-07c585f4c001]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 podman[127881]: time="2025-10-08T16:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:19:59 compute-0 systemd-machined[77548]: Machine qemu-4-instance-00000009 terminated.
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.749 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[8b41e715-541c-4432-a220-1c31e63ca378]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:19:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3481 "" "Go-http-client/1.1"
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.782 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[490868c7-0611-4e3f-a9e6-4a8e3333749a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 podman[143692]: 2025-10-08 16:19:59.793287905 +0000 UTC m=+0.103694194 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.803 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3a7192-63a8-423d-bb7f-a61727a0279d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 13, 'rx_bytes': 1924, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 13, 'rx_bytes': 1924, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143724, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.817 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[16fd03d4-0173-4011-b17e-7a03ccffd2f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143725, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143725, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.819 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.825 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.825 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.826 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.826 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.827 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9d8e94fc-be7a-43be-97cb-51da2fd2a4ce]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.905 2 INFO nova.virt.libvirt.driver [-] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Instance destroyed successfully.
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.906 2 DEBUG nova.objects.instance [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'resources' on Instance uuid 0429af3d-07c5-445e-bc3d-8df845af8e75 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.923 2 DEBUG nova.compute.manager [req-07841978-d068-4c29-aee5-080448fb10e8 req-4463bc98-c158-41a0-93fb-69992398cf6f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-unplugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.924 2 DEBUG oslo_concurrency.lockutils [req-07841978-d068-4c29-aee5-080448fb10e8 req-4463bc98-c158-41a0-93fb-69992398cf6f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.924 2 DEBUG oslo_concurrency.lockutils [req-07841978-d068-4c29-aee5-080448fb10e8 req-4463bc98-c158-41a0-93fb-69992398cf6f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.924 2 DEBUG oslo_concurrency.lockutils [req-07841978-d068-4c29-aee5-080448fb10e8 req-4463bc98-c158-41a0-93fb-69992398cf6f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.925 2 DEBUG nova.compute.manager [req-07841978-d068-4c29-aee5-080448fb10e8 req-4463bc98-c158-41a0-93fb-69992398cf6f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] No waiting events found dispatching network-vif-unplugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.925 2 DEBUG nova.compute.manager [req-07841978-d068-4c29-aee5-080448fb10e8 req-4463bc98-c158-41a0-93fb-69992398cf6f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-unplugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.955 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0f6f8aa7-8a43-4471-afed-4203d5b80b4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.956 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance a11dbe8f-56c0-469a-91dc-1f2104aedd13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.956 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0429af3d-07c5-445e-bc3d-8df845af8e75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.956 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance a7fde225-dfe9-46d6-a12e-df2beab37b0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.956 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 792f361b-347c-4139-b0a6-9eace69ac31d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.957 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.957 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=79GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:19:58 up 28 min,  0 user,  load average: 0.55, 0.30, 0.30\n', 'num_instances': '5', 'num_vm_active': '5', 'num_task_None': '5', 'num_os_type_None': '5', 'num_proj_36f986860cbf4338bf6afd8aa7b4d147': '5', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.988 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:19:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:19:59.989 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:19:59 compute-0 nova_compute[117413]: 2025-10-08 16:19:59.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.013 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.062 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.062 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.078 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.097 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.183 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.412 2 DEBUG nova.virt.libvirt.vif [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-182345068',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-182345068',id=9,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-pwekqbnq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:19:06Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=0429af3d-07c5-445e-bc3d-8df845af8e75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.413 2 DEBUG nova.network.os_vif_util [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "address": "fa:16:3e:26:63:14", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14b9a922-7b", "ovs_interfaceid": "14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.415 2 DEBUG nova.network.os_vif_util [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.416 2 DEBUG os_vif [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14b9a922-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=886fde64-d1f6-4682-8a87-cfdc90f61137) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.432 2 INFO os_vif [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:63:14,bridge_name='br-int',has_traffic_filtering=True,id=14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14b9a922-7b')
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.433 2 INFO nova.virt.libvirt.driver [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Deleting instance files /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75_del
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.434 2 INFO nova.virt.libvirt.driver [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Deletion of /var/lib/nova/instances/0429af3d-07c5-445e-bc3d-8df845af8e75_del complete
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.693 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.948 2 INFO nova.compute.manager [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.948 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.949 2 DEBUG nova.compute.manager [-] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.949 2 DEBUG nova.network.neutron [-] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:20:00 compute-0 nova_compute[117413]: 2025-10-08 16:20:00.949 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.205 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.205 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.842s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: ERROR   16:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: ERROR   16:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: ERROR   16:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: ERROR   16:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:20:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.693 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.693 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.694 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.694 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.694 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.694 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.694 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.694 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.706 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.980 2 DEBUG nova.compute.manager [req-30c8240c-124b-4be1-846e-a36f7023c965 req-a29e84d2-ee7e-484f-93f4-ba5764f73405 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-unplugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.980 2 DEBUG oslo_concurrency.lockutils [req-30c8240c-124b-4be1-846e-a36f7023c965 req-a29e84d2-ee7e-484f-93f4-ba5764f73405 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.981 2 DEBUG oslo_concurrency.lockutils [req-30c8240c-124b-4be1-846e-a36f7023c965 req-a29e84d2-ee7e-484f-93f4-ba5764f73405 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.981 2 DEBUG oslo_concurrency.lockutils [req-30c8240c-124b-4be1-846e-a36f7023c965 req-a29e84d2-ee7e-484f-93f4-ba5764f73405 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.981 2 DEBUG nova.compute.manager [req-30c8240c-124b-4be1-846e-a36f7023c965 req-a29e84d2-ee7e-484f-93f4-ba5764f73405 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] No waiting events found dispatching network-vif-unplugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:01 compute-0 nova_compute[117413]: 2025-10-08 16:20:01.982 2 DEBUG nova.compute.manager [req-30c8240c-124b-4be1-846e-a36f7023c965 req-a29e84d2-ee7e-484f-93f4-ba5764f73405 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-unplugged-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:02 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 08 16:20:02 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 08 16:20:02 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 08 16:20:02 compute-0 nova_compute[117413]: 2025-10-08 16:20:02.456 2 DEBUG nova.network.neutron [-] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:02 compute-0 podman[143744]: 2025-10-08 16:20:02.465469853 +0000 UTC m=+0.071956120 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 08 16:20:02 compute-0 nova_compute[117413]: 2025-10-08 16:20:02.964 2 INFO nova.compute.manager [-] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Took 2.01 seconds to deallocate network for instance.
Oct 08 16:20:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:02.991 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:03 compute-0 nova_compute[117413]: 2025-10-08 16:20:03.498 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:03 compute-0 nova_compute[117413]: 2025-10-08 16:20:03.499 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:03 compute-0 nova_compute[117413]: 2025-10-08 16:20:03.615 2 DEBUG nova.compute.provider_tree [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:04 compute-0 nova_compute[117413]: 2025-10-08 16:20:04.045 2 DEBUG nova.compute.manager [req-8494cb83-ce21-4199-afcc-3c4a51fd41a2 req-0bca82ff-641e-4f27-964c-feada52d3ba2 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0429af3d-07c5-445e-bc3d-8df845af8e75] Received event network-vif-deleted-14b9a922-7b69-4e6a-9cf6-1fb5dc872a1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:04 compute-0 nova_compute[117413]: 2025-10-08 16:20:04.124 2 DEBUG nova.scheduler.client.report [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:04 compute-0 nova_compute[117413]: 2025-10-08 16:20:04.635 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.136s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:04 compute-0 nova_compute[117413]: 2025-10-08 16:20:04.674 2 INFO nova.scheduler.client.report [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Deleted allocations for instance 0429af3d-07c5-445e-bc3d-8df845af8e75
Oct 08 16:20:05 compute-0 nova_compute[117413]: 2025-10-08 16:20:05.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:05 compute-0 nova_compute[117413]: 2025-10-08 16:20:05.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:05 compute-0 nova_compute[117413]: 2025-10-08 16:20:05.708 2 DEBUG oslo_concurrency.lockutils [None req-8a35cbea-5d4a-4fbd-97ce-d389cb3e459d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0429af3d-07c5-445e-bc3d-8df845af8e75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.599s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:05 compute-0 nova_compute[117413]: 2025-10-08 16:20:05.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.390 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.391 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.392 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.393 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.394 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.409 2 INFO nova.compute.manager [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Terminating instance
Oct 08 16:20:06 compute-0 nova_compute[117413]: 2025-10-08 16:20:06.929 2 DEBUG nova.compute.manager [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:20:06 compute-0 kernel: tap6a3204f0-27 (unregistering): left promiscuous mode
Oct 08 16:20:06 compute-0 NetworkManager[1034]: <info>  [1759940406.9633] device (tap6a3204f0-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:20:07 compute-0 ovn_controller[19768]: 2025-10-08T16:20:07Z|00075|binding|INFO|Releasing lport 6a3204f0-278d-4801-ad42-56b69d44ee1e from this chassis (sb_readonly=0)
Oct 08 16:20:07 compute-0 ovn_controller[19768]: 2025-10-08T16:20:07Z|00076|binding|INFO|Setting lport 6a3204f0-278d-4801-ad42-56b69d44ee1e down in Southbound
Oct 08 16:20:07 compute-0 ovn_controller[19768]: 2025-10-08T16:20:07Z|00077|binding|INFO|Removing iface tap6a3204f0-27 ovn-installed in OVS
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.018 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:e4:b5 10.100.0.6'], port_security=['fa:16:3e:6f:e4:b5 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '792f361b-347c-4139-b0a6-9eace69ac31d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '10', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=6a3204f0-278d-4801-ad42-56b69d44ee1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.019 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 6a3204f0-278d-4801-ad42-56b69d44ee1e in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b unbound from our chassis
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.020 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.045 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[94c87d51-07f3-4536-b6bb-28201d535bda]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 08 16:20:07 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Consumed 13.083s CPU time.
Oct 08 16:20:07 compute-0 systemd-machined[77548]: Machine qemu-6-instance-00000008 terminated.
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.103 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[e36e5af7-3ee4-4407-a693-aa3ff4d3d064]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.107 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3da3a7-d7c1-4831-9740-59558e40df09]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.146 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[81511b15-7962-4e9e-9c98-7a7d12095870]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.164 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ad32bc-0023-402f-a108-3197fafc558c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 15, 'rx_bytes': 2008, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 15, 'rx_bytes': 2008, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143778, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.182 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2d9f6888-d56c-431d-8d71-db33373ee475]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143784, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143784, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.184 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.193 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.194 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.194 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.194 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:20:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:07.195 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ac583b19-c702-41cf-b1c1-bf83ecc7136b]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.206 2 INFO nova.virt.libvirt.driver [-] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Instance destroyed successfully.
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.208 2 DEBUG nova.objects.instance [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'resources' on Instance uuid 792f361b-347c-4139-b0a6-9eace69ac31d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.209 2 DEBUG nova.compute.manager [req-e93f55df-6ab3-44e8-bfdb-8fc5974c262f req-5592a466-1ede-49a2-8cdd-aef7d4a984a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.210 2 DEBUG oslo_concurrency.lockutils [req-e93f55df-6ab3-44e8-bfdb-8fc5974c262f req-5592a466-1ede-49a2-8cdd-aef7d4a984a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.210 2 DEBUG oslo_concurrency.lockutils [req-e93f55df-6ab3-44e8-bfdb-8fc5974c262f req-5592a466-1ede-49a2-8cdd-aef7d4a984a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.210 2 DEBUG oslo_concurrency.lockutils [req-e93f55df-6ab3-44e8-bfdb-8fc5974c262f req-5592a466-1ede-49a2-8cdd-aef7d4a984a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.211 2 DEBUG nova.compute.manager [req-e93f55df-6ab3-44e8-bfdb-8fc5974c262f req-5592a466-1ede-49a2-8cdd-aef7d4a984a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] No waiting events found dispatching network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.211 2 DEBUG nova.compute.manager [req-e93f55df-6ab3-44e8-bfdb-8fc5974c262f req-5592a466-1ede-49a2-8cdd-aef7d4a984a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.715 2 DEBUG nova.virt.libvirt.vif [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:18:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-556686919',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-556686919',id=8,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:19:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-0b36uv99',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:19:54Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=792f361b-347c-4139-b0a6-9eace69ac31d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.716 2 DEBUG nova.network.os_vif_util [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "address": "fa:16:3e:6f:e4:b5", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a3204f0-27", "ovs_interfaceid": "6a3204f0-278d-4801-ad42-56b69d44ee1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.717 2 DEBUG nova.network.os_vif_util [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.717 2 DEBUG os_vif [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a3204f0-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=cf17d6a6-b3d3-4d9b-97e5-ec9b6f5224af) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.728 2 INFO os_vif [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:e4:b5,bridge_name='br-int',has_traffic_filtering=True,id=6a3204f0-278d-4801-ad42-56b69d44ee1e,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a3204f0-27')
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.728 2 INFO nova.virt.libvirt.driver [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Deleting instance files /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d_del
Oct 08 16:20:07 compute-0 nova_compute[117413]: 2025-10-08 16:20:07.736 2 INFO nova.virt.libvirt.driver [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Deletion of /var/lib/nova/instances/792f361b-347c-4139-b0a6-9eace69ac31d_del complete
Oct 08 16:20:08 compute-0 nova_compute[117413]: 2025-10-08 16:20:08.249 2 INFO nova.compute.manager [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 08 16:20:08 compute-0 nova_compute[117413]: 2025-10-08 16:20:08.250 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:20:08 compute-0 nova_compute[117413]: 2025-10-08 16:20:08.250 2 DEBUG nova.compute.manager [-] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:20:08 compute-0 nova_compute[117413]: 2025-10-08 16:20:08.250 2 DEBUG nova.network.neutron [-] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:20:08 compute-0 nova_compute[117413]: 2025-10-08 16:20:08.250 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:08 compute-0 nova_compute[117413]: 2025-10-08 16:20:08.764 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.258 2 DEBUG nova.compute.manager [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.259 2 DEBUG oslo_concurrency.lockutils [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.259 2 DEBUG oslo_concurrency.lockutils [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.260 2 DEBUG oslo_concurrency.lockutils [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.260 2 DEBUG nova.compute.manager [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] No waiting events found dispatching network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.261 2 DEBUG nova.compute.manager [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-unplugged-6a3204f0-278d-4801-ad42-56b69d44ee1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.261 2 DEBUG nova.compute.manager [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Received event network-vif-deleted-6a3204f0-278d-4801-ad42-56b69d44ee1e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.261 2 INFO nova.compute.manager [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Neutron deleted interface 6a3204f0-278d-4801-ad42-56b69d44ee1e; detaching it from the instance and deleting it from the info cache
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.262 2 DEBUG nova.network.neutron [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:09 compute-0 podman[143797]: 2025-10-08 16:20:09.499650508 +0000 UTC m=+0.094143849 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4)
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.539 2 DEBUG nova.network.neutron [-] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:09 compute-0 nova_compute[117413]: 2025-10-08 16:20:09.771 2 DEBUG nova.compute.manager [req-e46a280c-35a9-49bf-9f09-a04085270b19 req-80f51933-391e-448e-9d96-21d00cac37a5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Detach interface failed, port_id=6a3204f0-278d-4801-ad42-56b69d44ee1e, reason: Instance 792f361b-347c-4139-b0a6-9eace69ac31d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:20:10 compute-0 nova_compute[117413]: 2025-10-08 16:20:10.046 2 INFO nova.compute.manager [-] [instance: 792f361b-347c-4139-b0a6-9eace69ac31d] Took 1.80 seconds to deallocate network for instance.
Oct 08 16:20:10 compute-0 nova_compute[117413]: 2025-10-08 16:20:10.572 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:10 compute-0 nova_compute[117413]: 2025-10-08 16:20:10.572 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:10 compute-0 nova_compute[117413]: 2025-10-08 16:20:10.660 2 DEBUG nova.compute.provider_tree [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:10 compute-0 nova_compute[117413]: 2025-10-08 16:20:10.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:11 compute-0 nova_compute[117413]: 2025-10-08 16:20:11.166 2 DEBUG nova.scheduler.client.report [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:11 compute-0 nova_compute[117413]: 2025-10-08 16:20:11.675 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.103s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:11 compute-0 nova_compute[117413]: 2025-10-08 16:20:11.700 2 INFO nova.scheduler.client.report [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Deleted allocations for instance 792f361b-347c-4139-b0a6-9eace69ac31d
Oct 08 16:20:12 compute-0 nova_compute[117413]: 2025-10-08 16:20:12.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:12 compute-0 nova_compute[117413]: 2025-10-08 16:20:12.734 2 DEBUG oslo_concurrency.lockutils [None req-89dccbcb-bb3f-4bde-9354-91756dd435a3 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "792f361b-347c-4139-b0a6-9eace69ac31d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.343s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:14 compute-0 podman[143821]: 2025-10-08 16:20:14.500027537 +0000 UTC m=+0.078476862 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.087 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.088 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.088 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.088 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.088 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.101 2 INFO nova.compute.manager [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Terminating instance
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.616 2 DEBUG nova.compute.manager [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:20:15 compute-0 kernel: tap8ec197ec-6e (unregistering): left promiscuous mode
Oct 08 16:20:15 compute-0 NetworkManager[1034]: <info>  [1759940415.6531] device (tap8ec197ec-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:20:15 compute-0 ovn_controller[19768]: 2025-10-08T16:20:15Z|00078|binding|INFO|Releasing lport 8ec197ec-6e84-4cdb-8907-b92c269e3285 from this chassis (sb_readonly=0)
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:15 compute-0 ovn_controller[19768]: 2025-10-08T16:20:15Z|00079|binding|INFO|Setting lport 8ec197ec-6e84-4cdb-8907-b92c269e3285 down in Southbound
Oct 08 16:20:15 compute-0 ovn_controller[19768]: 2025-10-08T16:20:15Z|00080|binding|INFO|Removing iface tap8ec197ec-6e ovn-installed in OVS
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.675 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:50:fa 10.100.0.3'], port_security=['fa:16:3e:93:50:fa 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'a11dbe8f-56c0-469a-91dc-1f2104aedd13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '5', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=8ec197ec-6e84-4cdb-8907-b92c269e3285) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.676 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 8ec197ec-6e84-4cdb-8907-b92c269e3285 in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b unbound from our chassis
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.678 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.703 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0c42e526-7c88-461c-af2a-0e6dedd6b8a0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 08 16:20:15 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 17.236s CPU time.
Oct 08 16:20:15 compute-0 systemd-machined[77548]: Machine qemu-3-instance-00000007 terminated.
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.750 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3f01d0-c92b-4c2f-9509-0aa0d8b78c9e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.753 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[71743e47-6d7a-40b6-b577-de616f069ddc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.795 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[066f9e5d-e9d6-42e2-893e-ad460ae192db]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.816 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[082a7bd7-a66d-4b19-a09f-ab52f3654fd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 17, 'rx_bytes': 2008, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 17, 'rx_bytes': 2008, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143852, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.843 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ae52d9ab-603e-42da-8839-2be841dd1188]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143853, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143853, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.846 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.899 2 INFO nova.virt.libvirt.driver [-] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Instance destroyed successfully.
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.900 2 DEBUG nova.objects.instance [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'resources' on Instance uuid a11dbe8f-56c0-469a-91dc-1f2104aedd13 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.900 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.900 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.901 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.901 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:20:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:15.903 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5777f1bd-3236-4528-9c02-883a741af8da]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.973 2 DEBUG nova.compute.manager [req-85addc8b-6880-4f87-a6db-2656b067488f req-02564a6a-a71d-4ce3-88ec-c4dee04dcf11 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-unplugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.973 2 DEBUG oslo_concurrency.lockutils [req-85addc8b-6880-4f87-a6db-2656b067488f req-02564a6a-a71d-4ce3-88ec-c4dee04dcf11 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.974 2 DEBUG oslo_concurrency.lockutils [req-85addc8b-6880-4f87-a6db-2656b067488f req-02564a6a-a71d-4ce3-88ec-c4dee04dcf11 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.974 2 DEBUG oslo_concurrency.lockutils [req-85addc8b-6880-4f87-a6db-2656b067488f req-02564a6a-a71d-4ce3-88ec-c4dee04dcf11 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.974 2 DEBUG nova.compute.manager [req-85addc8b-6880-4f87-a6db-2656b067488f req-02564a6a-a71d-4ce3-88ec-c4dee04dcf11 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] No waiting events found dispatching network-vif-unplugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:15 compute-0 nova_compute[117413]: 2025-10-08 16:20:15.974 2 DEBUG nova.compute.manager [req-85addc8b-6880-4f87-a6db-2656b067488f req-02564a6a-a71d-4ce3-88ec-c4dee04dcf11 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-unplugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.406 2 DEBUG nova.virt.libvirt.vif [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:18:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-284191439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-284191439',id=7,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:18:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-51qp54u6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:18:22Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=a11dbe8f-56c0-469a-91dc-1f2104aedd13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.407 2 DEBUG nova.network.os_vif_util [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "address": "fa:16:3e:93:50:fa", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ec197ec-6e", "ovs_interfaceid": "8ec197ec-6e84-4cdb-8907-b92c269e3285", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.407 2 DEBUG nova.network.os_vif_util [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.408 2 DEBUG os_vif [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ec197ec-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=0c0cfb76-46a3-4b87-aad5-5e9db60e8f89) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.418 2 INFO os_vif [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:50:fa,bridge_name='br-int',has_traffic_filtering=True,id=8ec197ec-6e84-4cdb-8907-b92c269e3285,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ec197ec-6e')
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.419 2 INFO nova.virt.libvirt.driver [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Deleting instance files /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13_del
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.419 2 INFO nova.virt.libvirt.driver [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Deletion of /var/lib/nova/instances/a11dbe8f-56c0-469a-91dc-1f2104aedd13_del complete
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.934 2 INFO nova.compute.manager [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.935 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.936 2 DEBUG nova.compute.manager [-] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.936 2 DEBUG nova.network.neutron [-] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:20:16 compute-0 nova_compute[117413]: 2025-10-08 16:20:16.936 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:17 compute-0 nova_compute[117413]: 2025-10-08 16:20:17.153 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:17 compute-0 nova_compute[117413]: 2025-10-08 16:20:17.446 2 DEBUG nova.compute.manager [req-e5d4207e-b560-4972-a410-14195f1d8ed9 req-81503d33-0871-4215-835f-e151e9475d31 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-deleted-8ec197ec-6e84-4cdb-8907-b92c269e3285 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:17 compute-0 nova_compute[117413]: 2025-10-08 16:20:17.447 2 INFO nova.compute.manager [req-e5d4207e-b560-4972-a410-14195f1d8ed9 req-81503d33-0871-4215-835f-e151e9475d31 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Neutron deleted interface 8ec197ec-6e84-4cdb-8907-b92c269e3285; detaching it from the instance and deleting it from the info cache
Oct 08 16:20:17 compute-0 nova_compute[117413]: 2025-10-08 16:20:17.447 2 DEBUG nova.network.neutron [req-e5d4207e-b560-4972-a410-14195f1d8ed9 req-81503d33-0871-4215-835f-e151e9475d31 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:17 compute-0 nova_compute[117413]: 2025-10-08 16:20:17.896 2 DEBUG nova.network.neutron [-] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:17 compute-0 nova_compute[117413]: 2025-10-08 16:20:17.954 2 DEBUG nova.compute.manager [req-e5d4207e-b560-4972-a410-14195f1d8ed9 req-81503d33-0871-4215-835f-e151e9475d31 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Detach interface failed, port_id=8ec197ec-6e84-4cdb-8907-b92c269e3285, reason: Instance a11dbe8f-56c0-469a-91dc-1f2104aedd13 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.045 2 DEBUG nova.compute.manager [req-38777913-666b-4255-b14f-9a9080ee85b2 req-0b37691d-29d9-4eb0-acef-d3c61599113d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-unplugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.045 2 DEBUG oslo_concurrency.lockutils [req-38777913-666b-4255-b14f-9a9080ee85b2 req-0b37691d-29d9-4eb0-acef-d3c61599113d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.045 2 DEBUG oslo_concurrency.lockutils [req-38777913-666b-4255-b14f-9a9080ee85b2 req-0b37691d-29d9-4eb0-acef-d3c61599113d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.046 2 DEBUG oslo_concurrency.lockutils [req-38777913-666b-4255-b14f-9a9080ee85b2 req-0b37691d-29d9-4eb0-acef-d3c61599113d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.046 2 DEBUG nova.compute.manager [req-38777913-666b-4255-b14f-9a9080ee85b2 req-0b37691d-29d9-4eb0-acef-d3c61599113d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] No waiting events found dispatching network-vif-unplugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.046 2 DEBUG nova.compute.manager [req-38777913-666b-4255-b14f-9a9080ee85b2 req-0b37691d-29d9-4eb0-acef-d3c61599113d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Received event network-vif-unplugged-8ec197ec-6e84-4cdb-8907-b92c269e3285 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.403 2 INFO nova.compute.manager [-] [instance: a11dbe8f-56c0-469a-91dc-1f2104aedd13] Took 1.47 seconds to deallocate network for instance.
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.931 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:18 compute-0 nova_compute[117413]: 2025-10-08 16:20:18.932 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:19 compute-0 nova_compute[117413]: 2025-10-08 16:20:19.038 2 DEBUG nova.compute.provider_tree [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:19 compute-0 podman[143872]: 2025-10-08 16:20:19.476320088 +0000 UTC m=+0.075162468 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:20:19 compute-0 podman[143873]: 2025-10-08 16:20:19.512185772 +0000 UTC m=+0.105568016 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.4)
Oct 08 16:20:19 compute-0 nova_compute[117413]: 2025-10-08 16:20:19.549 2 DEBUG nova.scheduler.client.report [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:20 compute-0 nova_compute[117413]: 2025-10-08 16:20:20.061 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.129s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:20 compute-0 nova_compute[117413]: 2025-10-08 16:20:20.079 2 INFO nova.scheduler.client.report [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Deleted allocations for instance a11dbe8f-56c0-469a-91dc-1f2104aedd13
Oct 08 16:20:20 compute-0 nova_compute[117413]: 2025-10-08 16:20:20.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.109 2 DEBUG oslo_concurrency.lockutils [None req-8a9cf96a-6732-4904-854c-ab906118682b 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a11dbe8f-56c0-469a-91dc-1f2104aedd13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.021s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.794 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.795 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.796 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.796 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.796 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:21 compute-0 nova_compute[117413]: 2025-10-08 16:20:21.810 2 INFO nova.compute.manager [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Terminating instance
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.324 2 DEBUG nova.compute.manager [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:20:22 compute-0 kernel: tap96506174-a9 (unregistering): left promiscuous mode
Oct 08 16:20:22 compute-0 NetworkManager[1034]: <info>  [1759940422.3539] device (tap96506174-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:20:22 compute-0 ovn_controller[19768]: 2025-10-08T16:20:22Z|00081|binding|INFO|Releasing lport 96506174-a98c-4a5a-ae80-848833d70dbb from this chassis (sb_readonly=0)
Oct 08 16:20:22 compute-0 ovn_controller[19768]: 2025-10-08T16:20:22Z|00082|binding|INFO|Setting lport 96506174-a98c-4a5a-ae80-848833d70dbb down in Southbound
Oct 08 16:20:22 compute-0 ovn_controller[19768]: 2025-10-08T16:20:22Z|00083|binding|INFO|Removing iface tap96506174-a9 ovn-installed in OVS
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.435 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:2b:94 10.100.0.14'], port_security=['fa:16:3e:e8:2b:94 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a7fde225-dfe9-46d6-a12e-df2beab37b0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '15', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=96506174-a98c-4a5a-ae80-848833d70dbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.436 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 96506174-a98c-4a5a-ae80-848833d70dbb in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b unbound from our chassis
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.438 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.456 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e29258-4689-4c48-b09b-eebb8b2bdc28]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 08 16:20:22 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Consumed 4.039s CPU time.
Oct 08 16:20:22 compute-0 systemd-machined[77548]: Machine qemu-5-instance-00000006 terminated.
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.498 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[48c147de-f9e2-402d-943e-166dbc689b4a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.501 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ed5662-eacb-4d62-9f46-4cb59bba4537]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.544 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[ad122980-d677-40dc-a5c8-34f9566f7721]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.579 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6c331059-7180-42e2-8965-52b24a2707c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfb6ba7b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 19, 'rx_bytes': 2008, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 19, 'rx_bytes': 2008, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150997, 'reachable_time': 30186, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 143937, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.605 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5b99aad5-946c-4b1a-badc-e0b5c699b49b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151010, 'tstamp': 151010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143946, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfb6ba7b-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151014, 'tstamp': 151014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 143946, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.607 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.611 2 INFO nova.virt.libvirt.driver [-] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Instance destroyed successfully.
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.611 2 DEBUG nova.objects.instance [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'resources' on Instance uuid a7fde225-dfe9-46d6-a12e-df2beab37b0c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.614 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfb6ba7b-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.614 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.615 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfb6ba7b-50, col_values=(('external_ids', {'iface-id': 'bc02923c-7f95-45ae-9ad1-1ed85859f940'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.615 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:20:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:22.617 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[861b839c-6a38-4a8d-994b-9d24985677af]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.958 2 DEBUG nova.compute.manager [req-e22e759a-027e-4b61-8d3a-2fd5af02d76a req-cf8cb177-45a9-4246-badd-cd0c86245871 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Received event network-vif-unplugged-96506174-a98c-4a5a-ae80-848833d70dbb external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.958 2 DEBUG oslo_concurrency.lockutils [req-e22e759a-027e-4b61-8d3a-2fd5af02d76a req-cf8cb177-45a9-4246-badd-cd0c86245871 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.959 2 DEBUG oslo_concurrency.lockutils [req-e22e759a-027e-4b61-8d3a-2fd5af02d76a req-cf8cb177-45a9-4246-badd-cd0c86245871 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.959 2 DEBUG oslo_concurrency.lockutils [req-e22e759a-027e-4b61-8d3a-2fd5af02d76a req-cf8cb177-45a9-4246-badd-cd0c86245871 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.960 2 DEBUG nova.compute.manager [req-e22e759a-027e-4b61-8d3a-2fd5af02d76a req-cf8cb177-45a9-4246-badd-cd0c86245871 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] No waiting events found dispatching network-vif-unplugged-96506174-a98c-4a5a-ae80-848833d70dbb pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:22 compute-0 nova_compute[117413]: 2025-10-08 16:20:22.960 2 DEBUG nova.compute.manager [req-e22e759a-027e-4b61-8d3a-2fd5af02d76a req-cf8cb177-45a9-4246-badd-cd0c86245871 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Received event network-vif-unplugged-96506174-a98c-4a5a-ae80-848833d70dbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.118 2 DEBUG nova.virt.libvirt.vif [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:17:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1864209771',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1864209771',id=6,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:17:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-1xu3zrnu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:19:45Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=a7fde225-dfe9-46d6-a12e-df2beab37b0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96506174-a98c-4a5a-ae80-848833d70dbb", "address": "fa:16:3e:e8:2b:94", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96506174-a9", "ovs_interfaceid": "96506174-a98c-4a5a-ae80-848833d70dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.119 2 DEBUG nova.network.os_vif_util [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "96506174-a98c-4a5a-ae80-848833d70dbb", "address": "fa:16:3e:e8:2b:94", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96506174-a9", "ovs_interfaceid": "96506174-a98c-4a5a-ae80-848833d70dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.120 2 DEBUG nova.network.os_vif_util [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=96506174-a98c-4a5a-ae80-848833d70dbb,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96506174-a9') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.121 2 DEBUG os_vif [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=96506174-a98c-4a5a-ae80-848833d70dbb,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96506174-a9') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.124 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96506174-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=70bb9b39-eae5-4072-b5e0-348124d90fd8) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.137 2 INFO os_vif [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:2b:94,bridge_name='br-int',has_traffic_filtering=True,id=96506174-a98c-4a5a-ae80-848833d70dbb,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96506174-a9')
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.138 2 INFO nova.virt.libvirt.driver [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Deleting instance files /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c_del
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.139 2 INFO nova.virt.libvirt.driver [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Deletion of /var/lib/nova/instances/a7fde225-dfe9-46d6-a12e-df2beab37b0c_del complete
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.655 2 INFO nova.compute.manager [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.656 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.656 2 DEBUG nova.compute.manager [-] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.656 2 DEBUG nova.network.neutron [-] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.657 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:23 compute-0 nova_compute[117413]: 2025-10-08 16:20:23.852 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:24 compute-0 nova_compute[117413]: 2025-10-08 16:20:24.109 2 DEBUG nova.compute.manager [req-b3e62ece-952b-47f1-a5d9-41b070414671 req-62c9bbfd-9665-4ad7-be6b-e24ff76afd8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Received event network-vif-deleted-96506174-a98c-4a5a-ae80-848833d70dbb external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:24 compute-0 nova_compute[117413]: 2025-10-08 16:20:24.110 2 INFO nova.compute.manager [req-b3e62ece-952b-47f1-a5d9-41b070414671 req-62c9bbfd-9665-4ad7-be6b-e24ff76afd8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Neutron deleted interface 96506174-a98c-4a5a-ae80-848833d70dbb; detaching it from the instance and deleting it from the info cache
Oct 08 16:20:24 compute-0 nova_compute[117413]: 2025-10-08 16:20:24.110 2 DEBUG nova.network.neutron [req-b3e62ece-952b-47f1-a5d9-41b070414671 req-62c9bbfd-9665-4ad7-be6b-e24ff76afd8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:24 compute-0 nova_compute[117413]: 2025-10-08 16:20:24.570 2 DEBUG nova.network.neutron [-] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:24 compute-0 nova_compute[117413]: 2025-10-08 16:20:24.618 2 DEBUG nova.compute.manager [req-b3e62ece-952b-47f1-a5d9-41b070414671 req-62c9bbfd-9665-4ad7-be6b-e24ff76afd8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Detach interface failed, port_id=96506174-a98c-4a5a-ae80-848833d70dbb, reason: Instance a7fde225-dfe9-46d6-a12e-df2beab37b0c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.012 2 DEBUG nova.compute.manager [req-462e9abc-979c-4073-90e7-88a0f5b785d5 req-71e17ee4-f45d-4b9f-a5d1-af15ae219fe7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Received event network-vif-unplugged-96506174-a98c-4a5a-ae80-848833d70dbb external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.013 2 DEBUG oslo_concurrency.lockutils [req-462e9abc-979c-4073-90e7-88a0f5b785d5 req-71e17ee4-f45d-4b9f-a5d1-af15ae219fe7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.014 2 DEBUG oslo_concurrency.lockutils [req-462e9abc-979c-4073-90e7-88a0f5b785d5 req-71e17ee4-f45d-4b9f-a5d1-af15ae219fe7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.014 2 DEBUG oslo_concurrency.lockutils [req-462e9abc-979c-4073-90e7-88a0f5b785d5 req-71e17ee4-f45d-4b9f-a5d1-af15ae219fe7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.015 2 DEBUG nova.compute.manager [req-462e9abc-979c-4073-90e7-88a0f5b785d5 req-71e17ee4-f45d-4b9f-a5d1-af15ae219fe7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] No waiting events found dispatching network-vif-unplugged-96506174-a98c-4a5a-ae80-848833d70dbb pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.015 2 DEBUG nova.compute.manager [req-462e9abc-979c-4073-90e7-88a0f5b785d5 req-71e17ee4-f45d-4b9f-a5d1-af15ae219fe7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Received event network-vif-unplugged-96506174-a98c-4a5a-ae80-848833d70dbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.077 2 INFO nova.compute.manager [-] [instance: a7fde225-dfe9-46d6-a12e-df2beab37b0c] Took 1.42 seconds to deallocate network for instance.
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.602 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.603 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.659 2 DEBUG nova.compute.provider_tree [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:25 compute-0 nova_compute[117413]: 2025-10-08 16:20:25.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:26 compute-0 nova_compute[117413]: 2025-10-08 16:20:26.169 2 DEBUG nova.scheduler.client.report [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:26 compute-0 nova_compute[117413]: 2025-10-08 16:20:26.682 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.079s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:26 compute-0 nova_compute[117413]: 2025-10-08 16:20:26.706 2 INFO nova.scheduler.client.report [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Deleted allocations for instance a7fde225-dfe9-46d6-a12e-df2beab37b0c
Oct 08 16:20:27 compute-0 nova_compute[117413]: 2025-10-08 16:20:27.750 2 DEBUG oslo_concurrency.lockutils [None req-b5317d0b-6f72-40d1-a4d7-5d953ba6c5af 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "a7fde225-dfe9-46d6-a12e-df2beab37b0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.954s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:28 compute-0 nova_compute[117413]: 2025-10-08 16:20:28.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:29 compute-0 nova_compute[117413]: 2025-10-08 16:20:29.649 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:29 compute-0 nova_compute[117413]: 2025-10-08 16:20:29.650 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:29 compute-0 nova_compute[117413]: 2025-10-08 16:20:29.650 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:29 compute-0 nova_compute[117413]: 2025-10-08 16:20:29.651 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:29 compute-0 nova_compute[117413]: 2025-10-08 16:20:29.651 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:29 compute-0 nova_compute[117413]: 2025-10-08 16:20:29.666 2 INFO nova.compute.manager [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Terminating instance
Oct 08 16:20:29 compute-0 podman[127881]: time="2025-10-08T16:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:20:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:20:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.185 2 DEBUG nova.compute.manager [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:20:30 compute-0 kernel: tap68bf22e3-50 (unregistering): left promiscuous mode
Oct 08 16:20:30 compute-0 NetworkManager[1034]: <info>  [1759940430.2162] device (tap68bf22e3-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 ovn_controller[19768]: 2025-10-08T16:20:30Z|00084|binding|INFO|Releasing lport 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd from this chassis (sb_readonly=0)
Oct 08 16:20:30 compute-0 ovn_controller[19768]: 2025-10-08T16:20:30Z|00085|binding|INFO|Setting lport 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd down in Southbound
Oct 08 16:20:30 compute-0 ovn_controller[19768]: 2025-10-08T16:20:30Z|00086|binding|INFO|Removing iface tap68bf22e3-50 ovn-installed in OVS
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.238 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:9c:8c 10.100.0.9'], port_security=['fa:16:3e:16:9c:8c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0f6f8aa7-8a43-4471-afed-4203d5b80b4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36f986860cbf4338bf6afd8aa7b4d147', 'neutron:revision_number': '5', 'neutron:security_group_ids': '215a932b-a88a-4280-bc86-df394b56782c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c20bcb7-facc-40e4-a92a-7c3dfec236b2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.241 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 68bf22e3-50d7-4692-9dab-6dd3ad5df0cd in datapath cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b unbound from our chassis
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.243 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.244 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaf79ec-0c34-4763-aba6-23837e9c2736]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.245 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b namespace which is not needed anymore
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 08 16:20:30 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 21.160s CPU time.
Oct 08 16:20:30 compute-0 systemd-machined[77548]: Machine qemu-2-instance-00000005 terminated.
Oct 08 16:20:30 compute-0 podman[143953]: 2025-10-08 16:20:30.329281584 +0000 UTC m=+0.070035821 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 08 16:20:30 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [NOTICE]   (142503) : haproxy version is 3.0.5-8e879a5
Oct 08 16:20:30 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [NOTICE]   (142503) : path to executable is /usr/sbin/haproxy
Oct 08 16:20:30 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [WARNING]  (142503) : Exiting Master process...
Oct 08 16:20:30 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [ALERT]    (142503) : Current worker (142505) exited with code 143 (Terminated)
Oct 08 16:20:30 compute-0 neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b[142499]: [WARNING]  (142503) : All workers exited. Exiting... (0)
Oct 08 16:20:30 compute-0 podman[143992]: 2025-10-08 16:20:30.418189424 +0000 UTC m=+0.045365967 container kill d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:20:30 compute-0 systemd[1]: libpod-d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6.scope: Deactivated successfully.
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.465 2 INFO nova.virt.libvirt.driver [-] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Instance destroyed successfully.
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.465 2 DEBUG nova.objects.instance [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lazy-loading 'resources' on Instance uuid 0f6f8aa7-8a43-4471-afed-4203d5b80b4c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:20:30 compute-0 podman[144019]: 2025-10-08 16:20:30.472147675 +0000 UTC m=+0.033791526 container died d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 08 16:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-104c62b0cd8a7b0f1fd4df48ed5247032f1a4708d7c7397457e218304bd75d33-merged.mount: Deactivated successfully.
Oct 08 16:20:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6-userdata-shm.mount: Deactivated successfully.
Oct 08 16:20:30 compute-0 podman[144019]: 2025-10-08 16:20:30.515325488 +0000 UTC m=+0.076969319 container cleanup d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:20:30 compute-0 systemd[1]: libpod-conmon-d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6.scope: Deactivated successfully.
Oct 08 16:20:30 compute-0 podman[144031]: 2025-10-08 16:20:30.534485005 +0000 UTC m=+0.068478347 container remove d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.553 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[aca03a12-c518-45aa-97de-c8fbae349ead]: (4, ("Wed Oct  8 04:20:30 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b (d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6)\nd88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6\nWed Oct  8 04:20:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b (d88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6)\nd88d09dc2e514e32b0a169f9ade5cf0ee7f09f818c7e04fa6b99a37e873b23e6\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.555 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcc55e0-d5bc-46f0-908a-b5a184cb3919]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.555 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.556 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[15f0bfea-41e8-4db6-8e99-23ebca737afe]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.557 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfb6ba7b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 kernel: tapcfb6ba7b-50: left promiscuous mode
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.589 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5304cd40-b1e1-4457-821e-4dd5fd95c3f1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.621 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0b59e9fc-6874-4900-ae7a-149bdf29e451]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.622 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[efc5cd8b-36c1-4fa9-a907-8f9f3bb925ff]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.639 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ed348486-8c2c-4c6b-ace9-96c8d1b16823]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 150988, 'reachable_time': 33632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 144064, 'error': None, 'target': 'ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.641 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:20:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:30.641 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa3a6c4-fa0c-461a-9b2d-c41117f9512e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dcfb6ba7b\x2d59b1\x2d46c3\x2da12a\x2d33ee111ccb6b.mount: Deactivated successfully.
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.950 2 DEBUG nova.compute.manager [req-db02eeb6-a465-4972-82e8-d3f522d0f6ff req-a6fa1e47-ce32-4c14-870a-bb73a3e8180b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-unplugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.950 2 DEBUG oslo_concurrency.lockutils [req-db02eeb6-a465-4972-82e8-d3f522d0f6ff req-a6fa1e47-ce32-4c14-870a-bb73a3e8180b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.951 2 DEBUG oslo_concurrency.lockutils [req-db02eeb6-a465-4972-82e8-d3f522d0f6ff req-a6fa1e47-ce32-4c14-870a-bb73a3e8180b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.951 2 DEBUG oslo_concurrency.lockutils [req-db02eeb6-a465-4972-82e8-d3f522d0f6ff req-a6fa1e47-ce32-4c14-870a-bb73a3e8180b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.951 2 DEBUG nova.compute.manager [req-db02eeb6-a465-4972-82e8-d3f522d0f6ff req-a6fa1e47-ce32-4c14-870a-bb73a3e8180b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] No waiting events found dispatching network-vif-unplugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.952 2 DEBUG nova.compute.manager [req-db02eeb6-a465-4972-82e8-d3f522d0f6ff req-a6fa1e47-ce32-4c14-870a-bb73a3e8180b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-unplugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.973 2 DEBUG nova.virt.libvirt.vif [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:16:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1268526964',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1268526964',id=5,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:17:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36f986860cbf4338bf6afd8aa7b4d147',ramdisk_id='',reservation_id='r-uc10fk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-898376163',owner_user_name='tempest-TestExecuteActionsViaActuator-898376163-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:17:03Z,user_data=None,user_id='723962be4e3d48efb441d80077ac4263',uuid=0f6f8aa7-8a43-4471-afed-4203d5b80b4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.974 2 DEBUG nova.network.os_vif_util [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converting VIF {"id": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "address": "fa:16:3e:16:9c:8c", "network": {"id": "cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-1284789583-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3644951a011848ec91c55840c9a66158", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68bf22e3-50", "ovs_interfaceid": "68bf22e3-50d7-4692-9dab-6dd3ad5df0cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.975 2 DEBUG nova.network.os_vif_util [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.976 2 DEBUG os_vif [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.979 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68bf22e3-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.985 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=c7f353c3-4431-42b5-8407-a3babbafa6ba) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.991 2 INFO os_vif [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9c:8c,bridge_name='br-int',has_traffic_filtering=True,id=68bf22e3-50d7-4692-9dab-6dd3ad5df0cd,network=Network(cfb6ba7b-59b1-46c3-a12a-33ee111ccb6b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68bf22e3-50')
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.991 2 INFO nova.virt.libvirt.driver [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Deleting instance files /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c_del
Oct 08 16:20:30 compute-0 nova_compute[117413]: 2025-10-08 16:20:30.993 2 INFO nova.virt.libvirt.driver [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Deletion of /var/lib/nova/instances/0f6f8aa7-8a43-4471-afed-4203d5b80b4c_del complete
Oct 08 16:20:31 compute-0 openstack_network_exporter[130039]: ERROR   16:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:20:31 compute-0 openstack_network_exporter[130039]: ERROR   16:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:20:31 compute-0 openstack_network_exporter[130039]: ERROR   16:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:20:31 compute-0 openstack_network_exporter[130039]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:20:31 compute-0 openstack_network_exporter[130039]: ERROR   16:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:20:31 compute-0 nova_compute[117413]: 2025-10-08 16:20:31.507 2 INFO nova.compute.manager [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 08 16:20:31 compute-0 nova_compute[117413]: 2025-10-08 16:20:31.508 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:20:31 compute-0 nova_compute[117413]: 2025-10-08 16:20:31.508 2 DEBUG nova.compute.manager [-] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:20:31 compute-0 nova_compute[117413]: 2025-10-08 16:20:31.508 2 DEBUG nova.network.neutron [-] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:20:31 compute-0 nova_compute[117413]: 2025-10-08 16:20:31.508 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:31 compute-0 nova_compute[117413]: 2025-10-08 16:20:31.713 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:20:32 compute-0 nova_compute[117413]: 2025-10-08 16:20:32.470 2 DEBUG nova.network.neutron [-] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:20:32 compute-0 nova_compute[117413]: 2025-10-08 16:20:32.979 2 INFO nova.compute.manager [-] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Took 1.47 seconds to deallocate network for instance.
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.037 2 DEBUG nova.compute.manager [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-unplugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.038 2 DEBUG oslo_concurrency.lockutils [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.038 2 DEBUG oslo_concurrency.lockutils [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.039 2 DEBUG oslo_concurrency.lockutils [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.039 2 DEBUG nova.compute.manager [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] No waiting events found dispatching network-vif-unplugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.039 2 DEBUG nova.compute.manager [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-unplugged-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.040 2 DEBUG nova.compute.manager [req-02ffc473-c1d9-47a2-9872-bf382f618d4a req-48413f34-9ecc-44b9-991f-7e0de9208f10 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0f6f8aa7-8a43-4471-afed-4203d5b80b4c] Received event network-vif-deleted-68bf22e3-50d7-4692-9dab-6dd3ad5df0cd external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.509 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.510 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:33 compute-0 podman[144065]: 2025-10-08 16:20:33.521065291 +0000 UTC m=+0.118082834 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, config_id=edpm, distribution-scope=public, vcs-type=git, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Oct 08 16:20:33 compute-0 nova_compute[117413]: 2025-10-08 16:20:33.584 2 DEBUG nova.compute.provider_tree [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:34 compute-0 nova_compute[117413]: 2025-10-08 16:20:34.130 2 DEBUG nova.scheduler.client.report [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:34 compute-0 nova_compute[117413]: 2025-10-08 16:20:34.640 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.131s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:34 compute-0 nova_compute[117413]: 2025-10-08 16:20:34.668 2 INFO nova.scheduler.client.report [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Deleted allocations for instance 0f6f8aa7-8a43-4471-afed-4203d5b80b4c
Oct 08 16:20:35 compute-0 nova_compute[117413]: 2025-10-08 16:20:35.704 2 DEBUG oslo_concurrency.lockutils [None req-3054843c-cbc7-42f3-9f14-cb93ebc7e95d 723962be4e3d48efb441d80077ac4263 36f986860cbf4338bf6afd8aa7b4d147 - - default default] Lock "0f6f8aa7-8a43-4471-afed-4203d5b80b4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.054s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:35 compute-0 nova_compute[117413]: 2025-10-08 16:20:35.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:35 compute-0 nova_compute[117413]: 2025-10-08 16:20:35.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:40 compute-0 podman[144088]: 2025-10-08 16:20:40.481152528 +0000 UTC m=+0.082533378 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:20:40 compute-0 nova_compute[117413]: 2025-10-08 16:20:40.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:40 compute-0 nova_compute[117413]: 2025-10-08 16:20:40.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:41.894 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:41.894 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:41.895 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:45 compute-0 podman[144112]: 2025-10-08 16:20:45.47042648 +0000 UTC m=+0.069893248 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 08 16:20:45 compute-0 nova_compute[117413]: 2025-10-08 16:20:45.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:45 compute-0 nova_compute[117413]: 2025-10-08 16:20:45.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:49 compute-0 nova_compute[117413]: 2025-10-08 16:20:49.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:50 compute-0 podman[144132]: 2025-10-08 16:20:50.499161069 +0000 UTC m=+0.095531099 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 08 16:20:50 compute-0 podman[144131]: 2025-10-08 16:20:50.509592177 +0000 UTC m=+0.103295521 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:20:50 compute-0 nova_compute[117413]: 2025-10-08 16:20:50.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:50 compute-0 nova_compute[117413]: 2025-10-08 16:20:50.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.882 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.882 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:20:55 compute-0 nova_compute[117413]: 2025-10-08 16:20:55.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:20:56 compute-0 nova_compute[117413]: 2025-10-08 16:20:56.092 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:20:56 compute-0 nova_compute[117413]: 2025-10-08 16:20:56.093 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:20:56 compute-0 nova_compute[117413]: 2025-10-08 16:20:56.126 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:20:56 compute-0 nova_compute[117413]: 2025-10-08 16:20:56.127 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6186MB free_disk=73.26312255859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:20:56 compute-0 nova_compute[117413]: 2025-10-08 16:20:56.127 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:20:56 compute-0 nova_compute[117413]: 2025-10-08 16:20:56.127 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:20:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:56.969 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:5a:c8 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e1544c6-45c6-42c6-9964-06578beca8d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb01c7700fd44ff4815f4e0cd565314f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3f740dd-3a2b-4559-b226-01ccdb01f473, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3fe1d362-1074-4501-8811-381dd4ad861d) old=Port_Binding(mac=['fa:16:3e:be:5a:c8'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e1544c6-45c6-42c6-9964-06578beca8d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eb01c7700fd44ff4815f4e0cd565314f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:20:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:56.970 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3fe1d362-1074-4501-8811-381dd4ad861d in datapath 1e1544c6-45c6-42c6-9964-06578beca8d2 updated
Oct 08 16:20:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:56.972 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e1544c6-45c6-42c6-9964-06578beca8d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:20:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:20:56.973 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cdebf85d-1c36-47f6-b04c-d15bec0cd8cf]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:20:57 compute-0 nova_compute[117413]: 2025-10-08 16:20:57.173 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:20:57 compute-0 nova_compute[117413]: 2025-10-08 16:20:57.174 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:20:56 up 29 min,  0 user,  load average: 0.35, 0.29, 0.29\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:20:57 compute-0 nova_compute[117413]: 2025-10-08 16:20:57.195 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:20:57 compute-0 nova_compute[117413]: 2025-10-08 16:20:57.703 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:20:58 compute-0 nova_compute[117413]: 2025-10-08 16:20:58.214 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:20:58 compute-0 nova_compute[117413]: 2025-10-08 16:20:58.215 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.087s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:20:59 compute-0 nova_compute[117413]: 2025-10-08 16:20:59.211 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:59 compute-0 nova_compute[117413]: 2025-10-08 16:20:59.211 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:59 compute-0 nova_compute[117413]: 2025-10-08 16:20:59.212 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:59 compute-0 nova_compute[117413]: 2025-10-08 16:20:59.212 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:20:59 compute-0 nova_compute[117413]: 2025-10-08 16:20:59.212 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:20:59 compute-0 podman[127881]: time="2025-10-08T16:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:20:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:20:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3021 "" "Go-http-client/1.1"
Oct 08 16:21:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:00.216 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:21:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:00.217 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:21:00 compute-0 nova_compute[117413]: 2025-10-08 16:21:00.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:00 compute-0 nova_compute[117413]: 2025-10-08 16:21:00.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:21:00 compute-0 podman[144184]: 2025-10-08 16:21:00.457663591 +0000 UTC m=+0.064627107 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.4)
Oct 08 16:21:00 compute-0 nova_compute[117413]: 2025-10-08 16:21:00.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:01 compute-0 nova_compute[117413]: 2025-10-08 16:21:01.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:01 compute-0 nova_compute[117413]: 2025-10-08 16:21:01.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:21:01 compute-0 nova_compute[117413]: 2025-10-08 16:21:01.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: ERROR   16:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: ERROR   16:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: ERROR   16:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: ERROR   16:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:21:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:21:04 compute-0 podman[144204]: 2025-10-08 16:21:04.483333242 +0000 UTC m=+0.080786458 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Oct 08 16:21:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:04.960 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:96:8a 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4d327c7e-50f1-4dd7-905b-94288e2c4c59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d327c7e-50f1-4dd7-905b-94288e2c4c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c9a1345c489441a8f545f201bb4a01a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfd1bfa5-6caa-4855-8ef7-a44a6a591777, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c02beb72-d4b4-49fe-8e84-5c80b4df107c) old=Port_Binding(mac=['fa:16:3e:0b:96:8a'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-4d327c7e-50f1-4dd7-905b-94288e2c4c59', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d327c7e-50f1-4dd7-905b-94288e2c4c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c9a1345c489441a8f545f201bb4a01a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:21:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:04.961 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c02beb72-d4b4-49fe-8e84-5c80b4df107c in datapath 4d327c7e-50f1-4dd7-905b-94288e2c4c59 updated
Oct 08 16:21:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:04.962 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d327c7e-50f1-4dd7-905b-94288e2c4c59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:21:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:04.963 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0b91ab76-64ed-4196-8256-13d8a9b86dfe]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:05 compute-0 nova_compute[117413]: 2025-10-08 16:21:05.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:06 compute-0 nova_compute[117413]: 2025-10-08 16:21:06.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:07 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:07.218 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:10 compute-0 nova_compute[117413]: 2025-10-08 16:21:10.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:11 compute-0 nova_compute[117413]: 2025-10-08 16:21:11.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:11 compute-0 podman[144228]: 2025-10-08 16:21:11.452590972 +0000 UTC m=+0.055595498 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:21:15 compute-0 nova_compute[117413]: 2025-10-08 16:21:15.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:16 compute-0 nova_compute[117413]: 2025-10-08 16:21:16.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:16 compute-0 podman[144248]: 2025-10-08 16:21:16.493026386 +0000 UTC m=+0.083115614 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 08 16:21:20 compute-0 nova_compute[117413]: 2025-10-08 16:21:20.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:21 compute-0 nova_compute[117413]: 2025-10-08 16:21:21.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:21 compute-0 podman[144267]: 2025-10-08 16:21:21.480477497 +0000 UTC m=+0.080957314 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:21:21 compute-0 podman[144268]: 2025-10-08 16:21:21.491771029 +0000 UTC m=+0.088829688 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 08 16:21:21 compute-0 ovn_controller[19768]: 2025-10-08T16:21:21Z|00087|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 08 16:21:25 compute-0 nova_compute[117413]: 2025-10-08 16:21:25.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:26 compute-0 nova_compute[117413]: 2025-10-08 16:21:26.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:28 compute-0 nova_compute[117413]: 2025-10-08 16:21:28.592 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:28 compute-0 nova_compute[117413]: 2025-10-08 16:21:28.593 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:29 compute-0 nova_compute[117413]: 2025-10-08 16:21:29.100 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:21:29 compute-0 nova_compute[117413]: 2025-10-08 16:21:29.652 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:29 compute-0 nova_compute[117413]: 2025-10-08 16:21:29.652 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:29 compute-0 nova_compute[117413]: 2025-10-08 16:21:29.659 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:21:29 compute-0 nova_compute[117413]: 2025-10-08 16:21:29.659 2 INFO nova.compute.claims [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:21:29 compute-0 podman[127881]: time="2025-10-08T16:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:21:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:21:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3023 "" "Go-http-client/1.1"
Oct 08 16:21:30 compute-0 nova_compute[117413]: 2025-10-08 16:21:30.713 2 DEBUG nova.compute.provider_tree [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:21:30 compute-0 nova_compute[117413]: 2025-10-08 16:21:30.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:31 compute-0 nova_compute[117413]: 2025-10-08 16:21:31.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:31 compute-0 nova_compute[117413]: 2025-10-08 16:21:31.220 2 DEBUG nova.scheduler.client.report [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: ERROR   16:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: ERROR   16:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: ERROR   16:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: ERROR   16:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: ERROR   16:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:21:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:21:31 compute-0 podman[144315]: 2025-10-08 16:21:31.470032337 +0000 UTC m=+0.077588737 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 16:21:31 compute-0 nova_compute[117413]: 2025-10-08 16:21:31.730 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.078s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:31 compute-0 nova_compute[117413]: 2025-10-08 16:21:31.731 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:21:32 compute-0 nova_compute[117413]: 2025-10-08 16:21:32.239 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:21:32 compute-0 nova_compute[117413]: 2025-10-08 16:21:32.240 2 DEBUG nova.network.neutron [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:21:32 compute-0 nova_compute[117413]: 2025-10-08 16:21:32.240 2 WARNING neutronclient.v2_0.client [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:21:32 compute-0 nova_compute[117413]: 2025-10-08 16:21:32.240 2 WARNING neutronclient.v2_0.client [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:21:32 compute-0 nova_compute[117413]: 2025-10-08 16:21:32.747 2 INFO nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:21:33 compute-0 nova_compute[117413]: 2025-10-08 16:21:33.249 2 DEBUG nova.network.neutron [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Successfully created port: 832861c7-4cbb-4865-bbf8-e006113d6965 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:21:33 compute-0 nova_compute[117413]: 2025-10-08 16:21:33.255 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.277 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.278 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.279 2 INFO nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Creating image(s)
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.279 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.280 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.280 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.281 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.284 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.285 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.351 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.352 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.353 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.353 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.359 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.359 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.422 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.424 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.466 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.468 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.468 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.523 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.524 2 DEBUG nova.virt.disk.api [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Checking if we can resize image /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.525 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.616 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.617 2 DEBUG nova.virt.disk.api [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Cannot resize image /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.618 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.618 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Ensure instance console log exists: /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.619 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.619 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.620 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.889 2 DEBUG nova.network.neutron [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Successfully updated port: 832861c7-4cbb-4865-bbf8-e006113d6965 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.977 2 DEBUG nova.compute.manager [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-changed-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.977 2 DEBUG nova.compute.manager [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Refreshing instance network info cache due to event network-changed-832861c7-4cbb-4865-bbf8-e006113d6965. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.978 2 DEBUG oslo_concurrency.lockutils [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.978 2 DEBUG oslo_concurrency.lockutils [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:21:34 compute-0 nova_compute[117413]: 2025-10-08 16:21:34.978 2 DEBUG nova.network.neutron [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Refreshing network info cache for port 832861c7-4cbb-4865-bbf8-e006113d6965 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:21:35 compute-0 nova_compute[117413]: 2025-10-08 16:21:35.399 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:21:35 compute-0 podman[144350]: 2025-10-08 16:21:35.453315049 +0000 UTC m=+0.066367967 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Oct 08 16:21:35 compute-0 nova_compute[117413]: 2025-10-08 16:21:35.483 2 WARNING neutronclient.v2_0.client [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:21:35 compute-0 nova_compute[117413]: 2025-10-08 16:21:35.867 2 DEBUG nova.network.neutron [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:21:35 compute-0 nova_compute[117413]: 2025-10-08 16:21:35.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:35 compute-0 nova_compute[117413]: 2025-10-08 16:21:35.991 2 DEBUG nova.network.neutron [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:21:36 compute-0 nova_compute[117413]: 2025-10-08 16:21:36.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:36 compute-0 nova_compute[117413]: 2025-10-08 16:21:36.498 2 DEBUG oslo_concurrency.lockutils [req-f3fb89dd-b496-4b22-b170-f432a79f7621 req-a387064f-f4d9-404e-ae7d-3c53c05f5024 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:21:36 compute-0 nova_compute[117413]: 2025-10-08 16:21:36.498 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquired lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:21:36 compute-0 nova_compute[117413]: 2025-10-08 16:21:36.499 2 DEBUG nova.network.neutron [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:21:37 compute-0 nova_compute[117413]: 2025-10-08 16:21:37.144 2 DEBUG nova.network.neutron [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:21:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 16:21:37 compute-0 nova_compute[117413]: 2025-10-08 16:21:37.332 2 WARNING neutronclient.v2_0.client [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:21:37 compute-0 nova_compute[117413]: 2025-10-08 16:21:37.494 2 DEBUG nova.network.neutron [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Updating instance_info_cache with network_info: [{"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.002 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Releasing lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.002 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Instance network_info: |[{"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.005 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Start _get_guest_xml network_info=[{"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.009 2 WARNING nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.011 2 DEBUG nova.virt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteBasicStrategy-server-896091599', uuid='f2872aac-1d69-4520-8f77-6fc89e222bbf'), owner=OwnerMeta(userid='9c974ab1a4a14567b497ddc5498211f4', username='tempest-TestExecuteBasicStrategy-1126837627-project-admin', projectid='5c9a1345c489441a8f545f201bb4a01a', projectname='tempest-TestExecuteBasicStrategy-1126837627'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940498.0111387) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.015 2 DEBUG nova.virt.libvirt.host [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.015 2 DEBUG nova.virt.libvirt.host [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.017 2 DEBUG nova.virt.libvirt.host [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.018 2 DEBUG nova.virt.libvirt.host [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.018 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.019 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.019 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.019 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.019 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.020 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.020 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.020 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.020 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.021 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.021 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.021 2 DEBUG nova.virt.hardware [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.026 2 DEBUG nova.virt.libvirt.vif [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-896091599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-896091599',id=11,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c9a1345c489441a8f545f201bb4a01a',ramdisk_id='',reservation_id='r-sidf9kof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-1126837627',owner_user_name='tempest-TestExecuteBasicStrategy-1126837627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:21:33Z,user_data=None,user_id='9c974ab1a4a14567b497ddc5498211f4',uuid=f2872aac-1d69-4520-8f77-6fc89e222bbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.027 2 DEBUG nova.network.os_vif_util [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Converting VIF {"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.027 2 DEBUG nova.network.os_vif_util [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.028 2 DEBUG nova.objects.instance [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lazy-loading 'pci_devices' on Instance uuid f2872aac-1d69-4520-8f77-6fc89e222bbf obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.538 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <uuid>f2872aac-1d69-4520-8f77-6fc89e222bbf</uuid>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <name>instance-0000000b</name>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteBasicStrategy-server-896091599</nova:name>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:21:38</nova:creationTime>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:21:38 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:21:38 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:user uuid="9c974ab1a4a14567b497ddc5498211f4">tempest-TestExecuteBasicStrategy-1126837627-project-admin</nova:user>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:project uuid="5c9a1345c489441a8f545f201bb4a01a">tempest-TestExecuteBasicStrategy-1126837627</nova:project>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         <nova:port uuid="832861c7-4cbb-4865-bbf8-e006113d6965">
Oct 08 16:21:38 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <system>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <entry name="serial">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <entry name="uuid">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </system>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <os>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </os>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <features>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </features>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:15:29:0f"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <target dev="tap832861c7-4c"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <video>
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </video>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:21:38 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:21:38 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:21:38 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:21:38 compute-0 nova_compute[117413]: </domain>
Oct 08 16:21:38 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.540 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Preparing to wait for external event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.540 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.541 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.541 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.543 2 DEBUG nova.virt.libvirt.vif [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-896091599',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-896091599',id=11,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c9a1345c489441a8f545f201bb4a01a',ramdisk_id='',reservation_id='r-sidf9kof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-1126837627',owner_user_name='tempest-TestExecuteBasicStrategy-1126837627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:21:33Z,user_data=None,user_id='9c974ab1a4a14567b497ddc5498211f4',uuid=f2872aac-1d69-4520-8f77-6fc89e222bbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.544 2 DEBUG nova.network.os_vif_util [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Converting VIF {"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.545 2 DEBUG nova.network.os_vif_util [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.546 2 DEBUG os_vif [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.548 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'b1c4f087-4ed1-5c69-b902-51dd7ba60c9d', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap832861c7-4c, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.560 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap832861c7-4c, col_values=(('qos', UUID('57c78d7b-33be-4bb1-8615-070e99b07649')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.561 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap832861c7-4c, col_values=(('external_ids', {'iface-id': '832861c7-4cbb-4865-bbf8-e006113d6965', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:29:0f', 'vm-uuid': 'f2872aac-1d69-4520-8f77-6fc89e222bbf'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 NetworkManager[1034]: <info>  [1759940498.5656] manager: (tap832861c7-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:38 compute-0 nova_compute[117413]: 2025-10-08 16:21:38.572 2 INFO os_vif [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c')
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.146 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.146 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.147 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] No VIF found with MAC fa:16:3e:15:29:0f, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.148 2 INFO nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Using config drive
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.660 2 WARNING neutronclient.v2_0.client [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.935 2 INFO nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Creating config drive at /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config
Oct 08 16:21:40 compute-0 nova_compute[117413]: 2025-10-08 16:21:40.946 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpy57pisah execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.091 2 DEBUG oslo_concurrency.processutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpy57pisah" returned: 0 in 0.145s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:41 compute-0 kernel: tap832861c7-4c: entered promiscuous mode
Oct 08 16:21:41 compute-0 NetworkManager[1034]: <info>  [1759940501.1560] manager: (tap832861c7-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Oct 08 16:21:41 compute-0 ovn_controller[19768]: 2025-10-08T16:21:41Z|00088|binding|INFO|Claiming lport 832861c7-4cbb-4865-bbf8-e006113d6965 for this chassis.
Oct 08 16:21:41 compute-0 ovn_controller[19768]: 2025-10-08T16:21:41Z|00089|binding|INFO|832861c7-4cbb-4865-bbf8-e006113d6965: Claiming fa:16:3e:15:29:0f 10.100.0.13
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.178 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:29:0f 10.100.0.13'], port_security=['fa:16:3e:15:29:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f2872aac-1d69-4520-8f77-6fc89e222bbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e1544c6-45c6-42c6-9964-06578beca8d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c9a1345c489441a8f545f201bb4a01a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ba3c9eb-de8f-4b86-aefc-05e93c518d68', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3f740dd-3a2b-4559-b226-01ccdb01f473, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=832861c7-4cbb-4865-bbf8-e006113d6965) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.180 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 832861c7-4cbb-4865-bbf8-e006113d6965 in datapath 1e1544c6-45c6-42c6-9964-06578beca8d2 bound to our chassis
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.181 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1e1544c6-45c6-42c6-9964-06578beca8d2
Oct 08 16:21:41 compute-0 systemd-udevd[144391]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:21:41 compute-0 systemd-machined[77548]: New machine qemu-7-instance-0000000b.
Oct 08 16:21:41 compute-0 NetworkManager[1034]: <info>  [1759940501.1984] device (tap832861c7-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.197 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[78ea04e8-f488-4c77-af4c-73f420b5bb84]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.198 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1e1544c6-41 in ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:21:41 compute-0 NetworkManager[1034]: <info>  [1759940501.1992] device (tap832861c7-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.200 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1e1544c6-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.200 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4f53f53b-9ddc-416b-92a6-e0fa5710c966]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.201 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1da029e2-73d6-4d28-acb8-7ba5c66dfb66]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.215 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[a84f2c44-1c78-4be0-bf4b-2e75fd463019]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000b.
Oct 08 16:21:41 compute-0 ovn_controller[19768]: 2025-10-08T16:21:41Z|00090|binding|INFO|Setting lport 832861c7-4cbb-4865-bbf8-e006113d6965 ovn-installed in OVS
Oct 08 16:21:41 compute-0 ovn_controller[19768]: 2025-10-08T16:21:41Z|00091|binding|INFO|Setting lport 832861c7-4cbb-4865-bbf8-e006113d6965 up in Southbound
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.233 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[77925378-752e-4f75-9429-b9518243458d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.269 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[f598081f-7d42-4532-8f90-ee0d01841984]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.273 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[53ddc5fb-bbc8-4db1-b1de-cd99a57fd8c0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 NetworkManager[1034]: <info>  [1759940501.2753] manager: (tap1e1544c6-40): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.309 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[9f99ffdc-9f58-4a32-8473-d67a2b8f4b88]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.312 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[c417d40b-7a0a-4739-8467-09ccdd08b641]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 NetworkManager[1034]: <info>  [1759940501.3369] device (tap1e1544c6-40): carrier: link connected
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.341 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce76f54-e1d9-4866-a54a-46999b6efe64]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.359 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c6047018-2696-4799-88c6-5ed076f451d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e1544c6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:5a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 179002, 'reachable_time': 21277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 144425, 'error': None, 'target': 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.381 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1a9f86-aaae-4f54-9618-fde7d7d2b268]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:5ac8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 179002, 'tstamp': 179002}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 144426, 'error': None, 'target': 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.404 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f7bc25a4-e0c6-4913-8923-eaba6dbe689c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1e1544c6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:5a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 179002, 'reachable_time': 21277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 144427, 'error': None, 'target': 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.448 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[08b7e8f6-f033-4ac0-8bc1-938f851f4898]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.522 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5a95124b-0cea-4253-a66d-6c19978e1a43]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.525 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e1544c6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.526 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.528 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e1544c6-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 kernel: tap1e1544c6-40: entered promiscuous mode
Oct 08 16:21:41 compute-0 NetworkManager[1034]: <info>  [1759940501.5316] manager: (tap1e1544c6-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.534 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1e1544c6-40, col_values=(('external_ids', {'iface-id': '3fe1d362-1074-4501-8811-381dd4ad861d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:21:41 compute-0 ovn_controller[19768]: 2025-10-08T16:21:41Z|00092|binding|INFO|Releasing lport 3fe1d362-1074-4501-8811-381dd4ad861d from this chassis (sb_readonly=0)
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.538 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[558155c0-278e-42aa-85e2-3a5776a32c14]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.539 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.540 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.540 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 1e1544c6-45c6-42c6-9964-06578beca8d2 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.540 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.540 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6b2abc-ec39-4bdc-ba20-34992f585589]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.541 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.541 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a89f7cba-a9c0-41ca-abb4-585f22b908c6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.541 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-1e1544c6-45c6-42c6-9964-06578beca8d2
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 1e1544c6-45c6-42c6-9964-06578beca8d2
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.542 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'env', 'PROCESS_TAG=haproxy-1e1544c6-45c6-42c6-9964-06578beca8d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1e1544c6-45c6-42c6-9964-06578beca8d2.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.544 2 DEBUG nova.compute.manager [req-47e30780-4b1b-4e18-b258-a4f3df7b2157 req-a2197bb2-4529-4961-b195-a4345643742a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.544 2 DEBUG oslo_concurrency.lockutils [req-47e30780-4b1b-4e18-b258-a4f3df7b2157 req-a2197bb2-4529-4961-b195-a4345643742a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.545 2 DEBUG oslo_concurrency.lockutils [req-47e30780-4b1b-4e18-b258-a4f3df7b2157 req-a2197bb2-4529-4961-b195-a4345643742a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.545 2 DEBUG oslo_concurrency.lockutils [req-47e30780-4b1b-4e18-b258-a4f3df7b2157 req-a2197bb2-4529-4961-b195-a4345643742a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.546 2 DEBUG nova.compute.manager [req-47e30780-4b1b-4e18-b258-a4f3df7b2157 req-a2197bb2-4529-4961-b195-a4345643742a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Processing event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:21:41 compute-0 nova_compute[117413]: 2025-10-08 16:21:41.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.895 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.896 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:21:41.897 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:41 compute-0 podman[144466]: 2025-10-08 16:21:41.970307353 +0000 UTC m=+0.063347160 container create b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 08 16:21:42 compute-0 systemd[1]: Started libpod-conmon-b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415.scope.
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.013 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.017 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.021 2 INFO nova.virt.libvirt.driver [-] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Instance spawned successfully.
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.021 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:21:42 compute-0 podman[144466]: 2025-10-08 16:21:41.939629237 +0000 UTC m=+0.032669044 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:21:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:21:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f84fcfd447c0ead69446ab6380b58a1c47065bc2757635b90c998d51b080781/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:21:42 compute-0 podman[144466]: 2025-10-08 16:21:42.059422488 +0000 UTC m=+0.152462295 container init b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:21:42 compute-0 podman[144466]: 2025-10-08 16:21:42.06684088 +0000 UTC m=+0.159880677 container start b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 08 16:21:42 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [NOTICE]   (144502) : New worker (144506) forked
Oct 08 16:21:42 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [NOTICE]   (144502) : Loading success.
Oct 08 16:21:42 compute-0 podman[144480]: 2025-10-08 16:21:42.113303447 +0000 UTC m=+0.094685615 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.535 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.536 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.536 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.536 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.536 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:21:42 compute-0 nova_compute[117413]: 2025-10-08 16:21:42.537 2 DEBUG nova.virt.libvirt.driver [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:21:43 compute-0 nova_compute[117413]: 2025-10-08 16:21:43.046 2 INFO nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Took 8.77 seconds to spawn the instance on the hypervisor.
Oct 08 16:21:43 compute-0 nova_compute[117413]: 2025-10-08 16:21:43.048 2 DEBUG nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:21:43 compute-0 nova_compute[117413]: 2025-10-08 16:21:43.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:43 compute-0 nova_compute[117413]: 2025-10-08 16:21:43.588 2 INFO nova.compute.manager [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Took 13.98 seconds to build instance.
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.051 2 DEBUG nova.compute.manager [req-dfc5670a-6864-4c31-89d6-ce415f64736e req-26362165-23c6-4625-a6cb-02b589069c8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.052 2 DEBUG oslo_concurrency.lockutils [req-dfc5670a-6864-4c31-89d6-ce415f64736e req-26362165-23c6-4625-a6cb-02b589069c8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.052 2 DEBUG oslo_concurrency.lockutils [req-dfc5670a-6864-4c31-89d6-ce415f64736e req-26362165-23c6-4625-a6cb-02b589069c8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.053 2 DEBUG oslo_concurrency.lockutils [req-dfc5670a-6864-4c31-89d6-ce415f64736e req-26362165-23c6-4625-a6cb-02b589069c8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.053 2 DEBUG nova.compute.manager [req-dfc5670a-6864-4c31-89d6-ce415f64736e req-26362165-23c6-4625-a6cb-02b589069c8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.054 2 WARNING nova.compute.manager [req-dfc5670a-6864-4c31-89d6-ce415f64736e req-26362165-23c6-4625-a6cb-02b589069c8b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received unexpected event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with vm_state active and task_state None.
Oct 08 16:21:44 compute-0 nova_compute[117413]: 2025-10-08 16:21:44.095 2 DEBUG oslo_concurrency.lockutils [None req-c06d1fb3-1a20-466a-8113-101be5991fcd 9c974ab1a4a14567b497ddc5498211f4 5c9a1345c489441a8f545f201bb4a01a - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.502s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:45 compute-0 nova_compute[117413]: 2025-10-08 16:21:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:47 compute-0 podman[144517]: 2025-10-08 16:21:47.446014787 +0000 UTC m=+0.056845004 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4)
Oct 08 16:21:48 compute-0 nova_compute[117413]: 2025-10-08 16:21:48.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:50 compute-0 nova_compute[117413]: 2025-10-08 16:21:50.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:52 compute-0 podman[144537]: 2025-10-08 16:21:52.47098306 +0000 UTC m=+0.062749903 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:21:52 compute-0 podman[144538]: 2025-10-08 16:21:52.513218956 +0000 UTC m=+0.100675706 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:21:53 compute-0 nova_compute[117413]: 2025-10-08 16:21:53.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:54 compute-0 ovn_controller[19768]: 2025-10-08T16:21:54Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:29:0f 10.100.0.13
Oct 08 16:21:54 compute-0 ovn_controller[19768]: 2025-10-08T16:21:54Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:29:0f 10.100.0.13
Oct 08 16:21:55 compute-0 nova_compute[117413]: 2025-10-08 16:21:55.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:57 compute-0 nova_compute[117413]: 2025-10-08 16:21:57.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:21:57 compute-0 nova_compute[117413]: 2025-10-08 16:21:57.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:21:57 compute-0 nova_compute[117413]: 2025-10-08 16:21:57.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:57 compute-0 nova_compute[117413]: 2025-10-08 16:21:57.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:57 compute-0 nova_compute[117413]: 2025-10-08 16:21:57.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:21:57 compute-0 nova_compute[117413]: 2025-10-08 16:21:57.880 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:21:58 compute-0 nova_compute[117413]: 2025-10-08 16:21:58.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:21:58 compute-0 nova_compute[117413]: 2025-10-08 16:21:58.938 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.000 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.001 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.068 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.224 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.226 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.247 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.248 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5961MB free_disk=73.23396301269531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.249 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:21:59 compute-0 nova_compute[117413]: 2025-10-08 16:21:59.249 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:21:59 compute-0 podman[127881]: time="2025-10-08T16:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:21:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:21:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3479 "" "Go-http-client/1.1"
Oct 08 16:22:00 compute-0 nova_compute[117413]: 2025-10-08 16:22:00.297 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance f2872aac-1d69-4520-8f77-6fc89e222bbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:22:00 compute-0 nova_compute[117413]: 2025-10-08 16:22:00.298 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:22:00 compute-0 nova_compute[117413]: 2025-10-08 16:22:00.298 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:21:59 up 30 min,  0 user,  load average: 0.32, 0.28, 0.29\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_5c9a1345c489441a8f545f201bb4a01a': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:22:00 compute-0 nova_compute[117413]: 2025-10-08 16:22:00.332 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:22:00 compute-0 nova_compute[117413]: 2025-10-08 16:22:00.838 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:22:00 compute-0 nova_compute[117413]: 2025-10-08 16:22:00.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:01 compute-0 nova_compute[117413]: 2025-10-08 16:22:01.350 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:22:01 compute-0 nova_compute[117413]: 2025-10-08 16:22:01.350 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.101s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: ERROR   16:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: ERROR   16:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: ERROR   16:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: ERROR   16:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: ERROR   16:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:22:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.350 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.351 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.351 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.351 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.351 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.352 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:22:02 compute-0 nova_compute[117413]: 2025-10-08 16:22:02.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:02 compute-0 podman[144612]: 2025-10-08 16:22:02.485055658 +0000 UTC m=+0.090212198 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 08 16:22:03 compute-0 nova_compute[117413]: 2025-10-08 16:22:03.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:05 compute-0 nova_compute[117413]: 2025-10-08 16:22:05.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:05 compute-0 nova_compute[117413]: 2025-10-08 16:22:05.411 2 DEBUG nova.compute.manager [None req-f1c62769-92bd-4604-ad68-1991cdd2a843 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Oct 08 16:22:05 compute-0 nova_compute[117413]: 2025-10-08 16:22:05.460 2 DEBUG nova.compute.provider_tree [None req-f1c62769-92bd-4604-ad68-1991cdd2a843 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 9 to 11 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:22:05 compute-0 nova_compute[117413]: 2025-10-08 16:22:05.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:06 compute-0 podman[144633]: 2025-10-08 16:22:06.469718148 +0000 UTC m=+0.072057078 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:22:08 compute-0 nova_compute[117413]: 2025-10-08 16:22:08.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:10 compute-0 nova_compute[117413]: 2025-10-08 16:22:10.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:11 compute-0 ovn_controller[19768]: 2025-10-08T16:22:11Z|00093|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct 08 16:22:12 compute-0 nova_compute[117413]: 2025-10-08 16:22:12.310 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Check if temp file /var/lib/nova/instances/tmpr418ybnk exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 08 16:22:12 compute-0 nova_compute[117413]: 2025-10-08 16:22:12.317 2 DEBUG nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpr418ybnk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='f2872aac-1d69-4520-8f77-6fc89e222bbf',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 08 16:22:12 compute-0 podman[144656]: 2025-10-08 16:22:12.483856468 +0000 UTC m=+0.077118673 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:22:13 compute-0 nova_compute[117413]: 2025-10-08 16:22:13.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:14 compute-0 unix_chkpwd[144679]: password check failed for user (root)
Oct 08 16:22:14 compute-0 sshd-session[144677]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:22:15 compute-0 nova_compute[117413]: 2025-10-08 16:22:15.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:16 compute-0 sshd-session[144677]: Failed password for root from 80.94.93.176 port 52758 ssh2
Oct 08 16:22:16 compute-0 unix_chkpwd[144680]: password check failed for user (root)
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.191 2 DEBUG oslo_concurrency.processutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.270 2 DEBUG oslo_concurrency.processutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.272 2 DEBUG oslo_concurrency.processutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.343 2 DEBUG oslo_concurrency.processutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.345 2 DEBUG nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Preparing to wait for external event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.345 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.345 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:17 compute-0 nova_compute[117413]: 2025-10-08 16:22:17.345 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:18 compute-0 podman[144687]: 2025-10-08 16:22:18.460780096 +0000 UTC m=+0.052887289 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:22:18 compute-0 nova_compute[117413]: 2025-10-08 16:22:18.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:18 compute-0 sshd-session[144677]: Failed password for root from 80.94.93.176 port 52758 ssh2
Oct 08 16:22:20 compute-0 unix_chkpwd[144706]: password check failed for user (root)
Oct 08 16:22:20 compute-0 nova_compute[117413]: 2025-10-08 16:22:20.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:22 compute-0 sshd-session[144677]: Failed password for root from 80.94.93.176 port 52758 ssh2
Oct 08 16:22:22 compute-0 sshd-session[144677]: Received disconnect from 80.94.93.176 port 52758:11:  [preauth]
Oct 08 16:22:22 compute-0 sshd-session[144677]: Disconnected from authenticating user root 80.94.93.176 port 52758 [preauth]
Oct 08 16:22:22 compute-0 sshd-session[144677]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:22:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:23.226 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:22:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:23.226 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.243 2 DEBUG nova.compute.manager [req-2393779f-da75-421b-a2a5-3ff4e214f46b req-46114cfc-d3dd-4dee-a745-c2ba05bafdb4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.244 2 DEBUG oslo_concurrency.lockutils [req-2393779f-da75-421b-a2a5-3ff4e214f46b req-46114cfc-d3dd-4dee-a745-c2ba05bafdb4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.245 2 DEBUG oslo_concurrency.lockutils [req-2393779f-da75-421b-a2a5-3ff4e214f46b req-46114cfc-d3dd-4dee-a745-c2ba05bafdb4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.245 2 DEBUG oslo_concurrency.lockutils [req-2393779f-da75-421b-a2a5-3ff4e214f46b req-46114cfc-d3dd-4dee-a745-c2ba05bafdb4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.246 2 DEBUG nova.compute.manager [req-2393779f-da75-421b-a2a5-3ff4e214f46b req-46114cfc-d3dd-4dee-a745-c2ba05bafdb4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No event matching network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 in dict_keys([('network-vif-plugged', '832861c7-4cbb-4865-bbf8-e006113d6965')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.246 2 DEBUG nova.compute.manager [req-2393779f-da75-421b-a2a5-3ff4e214f46b req-46114cfc-d3dd-4dee-a745-c2ba05bafdb4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:22:23 compute-0 unix_chkpwd[144710]: password check failed for user (root)
Oct 08 16:22:23 compute-0 sshd-session[144707]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:22:23 compute-0 podman[144711]: 2025-10-08 16:22:23.477538136 +0000 UTC m=+0.073421098 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:22:23 compute-0 podman[144712]: 2025-10-08 16:22:23.522771325 +0000 UTC m=+0.111469771 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251007)
Oct 08 16:22:23 compute-0 nova_compute[117413]: 2025-10-08 16:22:23.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:24 compute-0 nova_compute[117413]: 2025-10-08 16:22:24.365 2 INFO nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Took 7.02 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 08 16:22:24 compute-0 sshd-session[144707]: Failed password for root from 80.94.93.176 port 12138 ssh2
Oct 08 16:22:25 compute-0 unix_chkpwd[144760]: password check failed for user (root)
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.294 2 DEBUG nova.compute.manager [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.294 2 DEBUG oslo_concurrency.lockutils [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.295 2 DEBUG oslo_concurrency.lockutils [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.296 2 DEBUG oslo_concurrency.lockutils [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.296 2 DEBUG nova.compute.manager [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Processing event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.296 2 DEBUG nova.compute.manager [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-changed-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.297 2 DEBUG nova.compute.manager [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Refreshing instance network info cache due to event network-changed-832861c7-4cbb-4865-bbf8-e006113d6965. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.297 2 DEBUG oslo_concurrency.lockutils [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.298 2 DEBUG oslo_concurrency.lockutils [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.298 2 DEBUG nova.network.neutron [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Refreshing network info cache for port 832861c7-4cbb-4865-bbf8-e006113d6965 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.301 2 DEBUG nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.807 2 WARNING neutronclient.v2_0.client [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.814 2 DEBUG nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpr418ybnk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='f2872aac-1d69-4520-8f77-6fc89e222bbf',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(e5cb604f-9f92-4c0e-aa10-69ed163a3635),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 08 16:22:25 compute-0 nova_compute[117413]: 2025-10-08 16:22:25.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.156 2 WARNING neutronclient.v2_0.client [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.335 2 DEBUG nova.objects.instance [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid f2872aac-1d69-4520-8f77-6fc89e222bbf obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.337 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.340 2 DEBUG nova.network.neutron [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Updated VIF entry in instance network info cache for port 832861c7-4cbb-4865-bbf8-e006113d6965. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.340 2 DEBUG nova.network.neutron [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Updating instance_info_cache with network_info: [{"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.344 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.345 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.848 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.849 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.851 2 DEBUG oslo_concurrency.lockutils [req-e4823f51-46fe-487b-9811-65f0db68aba3 req-d595442b-8ee3-4996-a093-5ff401be3539 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-f2872aac-1d69-4520-8f77-6fc89e222bbf" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.855 2 DEBUG nova.virt.libvirt.vif [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-896091599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-896091599',id=11,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:21:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5c9a1345c489441a8f545f201bb4a01a',ramdisk_id='',reservation_id='r-sidf9kof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1126837627',owner_user_name='tempest-TestExecuteBasicStrategy-1126837627-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:21:43Z,user_data=None,user_id='9c974ab1a4a14567b497ddc5498211f4',uuid=f2872aac-1d69-4520-8f77-6fc89e222bbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.855 2 DEBUG nova.network.os_vif_util [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.857 2 DEBUG nova.network.os_vif_util [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.857 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Updating guest XML with vif config: <interface type="ethernet">
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <mac address="fa:16:3e:15:29:0f"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <model type="virtio"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <mtu size="1442"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <target dev="tap832861c7-4c"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]: </interface>
Oct 08 16:22:26 compute-0 nova_compute[117413]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.859 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <name>instance-0000000b</name>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <uuid>f2872aac-1d69-4520-8f77-6fc89e222bbf</uuid>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteBasicStrategy-server-896091599</nova:name>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:21:38</nova:creationTime>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:user uuid="9c974ab1a4a14567b497ddc5498211f4">tempest-TestExecuteBasicStrategy-1126837627-project-admin</nova:user>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:project uuid="5c9a1345c489441a8f545f201bb4a01a">tempest-TestExecuteBasicStrategy-1126837627</nova:project>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:port uuid="832861c7-4cbb-4865-bbf8-e006113d6965">
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <memory unit="KiB">131072</memory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <vcpu placement="static">1</vcpu>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <resource>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <partition>/machine</partition>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </resource>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <system>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="serial">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="uuid">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </system>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <os>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </os>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <features>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <vmcoreinfo state="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </features>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <cpu mode="host-model" check="partial">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_poweroff>destroy</on_poweroff>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_reboot>restart</on_reboot>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_crash>destroy</on_crash>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <readonly/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="1" port="0x10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="2" port="0x11"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="3" port="0x12"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="4" port="0x13"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="5" port="0x14"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="6" port="0x15"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="7" port="0x16"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="8" port="0x17"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="9" port="0x18"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="10" port="0x19"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="11" port="0x1a"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="12" port="0x1b"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="13" port="0x1c"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="14" port="0x1d"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="15" port="0x1e"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="16" port="0x1f"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="17" port="0x20"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="18" port="0x21"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="19" port="0x22"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="20" port="0x23"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="21" port="0x24"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="22" port="0x25"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="23" port="0x26"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="24" port="0x27"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="25" port="0x28"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-pci-bridge"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="sata" index="0">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:15:29:0f"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="tap832861c7-4c"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target type="isa-serial" port="0">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <model name="isa-serial"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </target>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <console type="pty">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target type="serial" port="0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </console>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="usb" bus="0" port="1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </input>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <input type="mouse" bus="ps2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <listen type="address" address="::"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <video>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model type="virtio" heads="1" primary="yes"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </video>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]: </domain>
Oct 08 16:22:26 compute-0 nova_compute[117413]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.860 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <name>instance-0000000b</name>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <uuid>f2872aac-1d69-4520-8f77-6fc89e222bbf</uuid>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteBasicStrategy-server-896091599</nova:name>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:21:38</nova:creationTime>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:user uuid="9c974ab1a4a14567b497ddc5498211f4">tempest-TestExecuteBasicStrategy-1126837627-project-admin</nova:user>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:project uuid="5c9a1345c489441a8f545f201bb4a01a">tempest-TestExecuteBasicStrategy-1126837627</nova:project>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:port uuid="832861c7-4cbb-4865-bbf8-e006113d6965">
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <memory unit="KiB">131072</memory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <vcpu placement="static">1</vcpu>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <resource>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <partition>/machine</partition>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </resource>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <system>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="serial">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="uuid">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </system>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <os>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </os>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <features>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <vmcoreinfo state="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </features>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <cpu mode="host-model" check="partial">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_poweroff>destroy</on_poweroff>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_reboot>restart</on_reboot>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_crash>destroy</on_crash>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <readonly/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="1" port="0x10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="2" port="0x11"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="3" port="0x12"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="4" port="0x13"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="5" port="0x14"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="6" port="0x15"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="7" port="0x16"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="8" port="0x17"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="9" port="0x18"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="10" port="0x19"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="11" port="0x1a"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="12" port="0x1b"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="13" port="0x1c"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="14" port="0x1d"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="15" port="0x1e"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="16" port="0x1f"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="17" port="0x20"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="18" port="0x21"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="19" port="0x22"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="20" port="0x23"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="21" port="0x24"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="22" port="0x25"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="23" port="0x26"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="24" port="0x27"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="25" port="0x28"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-pci-bridge"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="sata" index="0">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <interface type="ethernet"><mac address="fa:16:3e:15:29:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap832861c7-4c"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </interface><serial type="pty">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target type="isa-serial" port="0">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <model name="isa-serial"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </target>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <console type="pty">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target type="serial" port="0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </console>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="usb" bus="0" port="1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </input>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <input type="mouse" bus="ps2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <listen type="address" address="::"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <video>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model type="virtio" heads="1" primary="yes"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </video>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]: </domain>
Oct 08 16:22:26 compute-0 nova_compute[117413]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.861 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <name>instance-0000000b</name>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <uuid>f2872aac-1d69-4520-8f77-6fc89e222bbf</uuid>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteBasicStrategy-server-896091599</nova:name>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:21:38</nova:creationTime>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:user uuid="9c974ab1a4a14567b497ddc5498211f4">tempest-TestExecuteBasicStrategy-1126837627-project-admin</nova:user>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:project uuid="5c9a1345c489441a8f545f201bb4a01a">tempest-TestExecuteBasicStrategy-1126837627</nova:project>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <nova:port uuid="832861c7-4cbb-4865-bbf8-e006113d6965">
Oct 08 16:22:26 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <memory unit="KiB">131072</memory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <vcpu placement="static">1</vcpu>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <resource>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <partition>/machine</partition>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </resource>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <system>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="serial">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="uuid">f2872aac-1d69-4520-8f77-6fc89e222bbf</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </system>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <os>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </os>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <features>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <vmcoreinfo state="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </features>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <cpu mode="host-model" check="partial">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_poweroff>destroy</on_poweroff>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_reboot>restart</on_reboot>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <on_crash>destroy</on_crash>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/disk.config"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <readonly/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="1" port="0x10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="2" port="0x11"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="3" port="0x12"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="4" port="0x13"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="5" port="0x14"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="6" port="0x15"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="7" port="0x16"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="8" port="0x17"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="9" port="0x18"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="10" port="0x19"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="11" port="0x1a"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="12" port="0x1b"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="13" port="0x1c"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="14" port="0x1d"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="15" port="0x1e"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="16" port="0x1f"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="17" port="0x20"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="18" port="0x21"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="19" port="0x22"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="20" port="0x23"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="21" port="0x24"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="22" port="0x25"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="23" port="0x26"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="24" port="0x27"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target chassis="25" port="0x28"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model name="pcie-pci-bridge"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <controller type="sata" index="0">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <interface type="ethernet"><mac address="fa:16:3e:15:29:0f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap832861c7-4c"/><address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </interface><serial type="pty">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target type="isa-serial" port="0">
Oct 08 16:22:26 compute-0 nova_compute[117413]:         <model name="isa-serial"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       </target>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <console type="pty">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf/console.log" append="off"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <target type="serial" port="0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </console>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="usb" bus="0" port="1"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </input>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <input type="mouse" bus="ps2"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <listen type="address" address="::"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <video>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <model type="virtio" heads="1" primary="yes"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </video>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:22:26 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:22:26 compute-0 nova_compute[117413]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 08 16:22:26 compute-0 nova_compute[117413]: </domain>
Oct 08 16:22:26 compute-0 nova_compute[117413]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 08 16:22:26 compute-0 nova_compute[117413]: 2025-10-08 16:22:26.861 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 08 16:22:27 compute-0 sshd-session[144707]: Failed password for root from 80.94.93.176 port 12138 ssh2
Oct 08 16:22:27 compute-0 unix_chkpwd[144761]: password check failed for user (root)
Oct 08 16:22:27 compute-0 nova_compute[117413]: 2025-10-08 16:22:27.352 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 08 16:22:27 compute-0 nova_compute[117413]: 2025-10-08 16:22:27.353 2 INFO nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 08 16:22:28 compute-0 nova_compute[117413]: 2025-10-08 16:22:28.372 2 INFO nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 08 16:22:28 compute-0 nova_compute[117413]: 2025-10-08 16:22:28.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:28 compute-0 nova_compute[117413]: 2025-10-08 16:22:28.877 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 08 16:22:28 compute-0 nova_compute[117413]: 2025-10-08 16:22:28.878 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.383 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Current 50 elapsed 3 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.384 2 DEBUG nova.virt.libvirt.migration [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Downtime does not need to change update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:671
Oct 08 16:22:29 compute-0 sshd-session[144707]: Failed password for root from 80.94.93.176 port 12138 ssh2
Oct 08 16:22:29 compute-0 kernel: tap832861c7-4c (unregistering): left promiscuous mode
Oct 08 16:22:29 compute-0 NetworkManager[1034]: <info>  [1759940549.4445] device (tap832861c7-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:22:29 compute-0 ovn_controller[19768]: 2025-10-08T16:22:29Z|00094|binding|INFO|Releasing lport 832861c7-4cbb-4865-bbf8-e006113d6965 from this chassis (sb_readonly=0)
Oct 08 16:22:29 compute-0 ovn_controller[19768]: 2025-10-08T16:22:29Z|00095|binding|INFO|Setting lport 832861c7-4cbb-4865-bbf8-e006113d6965 down in Southbound
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 ovn_controller[19768]: 2025-10-08T16:22:29Z|00096|binding|INFO|Removing iface tap832861c7-4c ovn-installed in OVS
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.466 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:29:0f 10.100.0.13'], port_security=['fa:16:3e:15:29:0f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '71d4f943-a908-4940-aa7e-dbc90ffb7e42'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f2872aac-1d69-4520-8f77-6fc89e222bbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1e1544c6-45c6-42c6-9964-06578beca8d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c9a1345c489441a8f545f201bb4a01a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '0ba3c9eb-de8f-4b86-aefc-05e93c518d68', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c3f740dd-3a2b-4559-b226-01ccdb01f473, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=832861c7-4cbb-4865-bbf8-e006113d6965) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.467 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 832861c7-4cbb-4865-bbf8-e006113d6965 in datapath 1e1544c6-45c6-42c6-9964-06578beca8d2 unbound from our chassis
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.469 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1e1544c6-45c6-42c6-9964-06578beca8d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.470 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[954a427e-42df-49fe-bd8d-bb4e7308309f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.471 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2 namespace which is not needed anymore
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 08 16:22:29 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Consumed 14.190s CPU time.
Oct 08 16:22:29 compute-0 systemd-machined[77548]: Machine qemu-7-instance-0000000b terminated.
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [NOTICE]   (144502) : haproxy version is 3.0.5-8e879a5
Oct 08 16:22:29 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [NOTICE]   (144502) : path to executable is /usr/sbin/haproxy
Oct 08 16:22:29 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [WARNING]  (144502) : Exiting Master process...
Oct 08 16:22:29 compute-0 podman[144794]: 2025-10-08 16:22:29.667062461 +0000 UTC m=+0.054339691 container kill b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:22:29 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [ALERT]    (144502) : Current worker (144506) exited with code 143 (Terminated)
Oct 08 16:22:29 compute-0 neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2[144483]: [WARNING]  (144502) : All workers exited. Exiting... (0)
Oct 08 16:22:29 compute-0 systemd[1]: libpod-b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415.scope: Deactivated successfully.
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.704 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.705 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.705 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 08 16:22:29 compute-0 podman[144818]: 2025-10-08 16:22:29.717932252 +0000 UTC m=+0.029323663 container died b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415-userdata-shm.mount: Deactivated successfully.
Oct 08 16:22:29 compute-0 podman[127881]: time="2025-10-08T16:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:22:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f84fcfd447c0ead69446ab6380b58a1c47065bc2757635b90c998d51b080781-merged.mount: Deactivated successfully.
Oct 08 16:22:29 compute-0 podman[144818]: 2025-10-08 16:22:29.775082982 +0000 UTC m=+0.086474383 container cleanup b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 08 16:22:29 compute-0 systemd[1]: libpod-conmon-b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415.scope: Deactivated successfully.
Oct 08 16:22:29 compute-0 podman[144824]: 2025-10-08 16:22:29.789763323 +0000 UTC m=+0.083953160 container remove b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 08 16:22:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20753 "" "Go-http-client/1.1"
Oct 08 16:22:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.804 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b52e10ce-537f-47f9-9392-a44d60731edc]: (4, ("Wed Oct  8 04:22:29 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2 (b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415)\nb44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415\nWed Oct  8 04:22:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2 (b44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415)\nb44f92b1e8008982fba68153ac4cd84eed75249f9d5e684e2237ef9f58e24415\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.806 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[edc3522c-2fa8-4e0e-8c23-47f6eda258af]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.806 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1e1544c6-45c6-42c6-9964-06578beca8d2.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.807 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[69c93f28-e78c-4fc5-8201-3c379a65bb08]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.808 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e1544c6-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 kernel: tap1e1544c6-40: left promiscuous mode
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.829 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6069bf46-4e8c-44f4-89bc-99dc04992021]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.858 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c83fe245-255a-489e-8092-3427f5edc907]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.859 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[66dffb41-ad95-45ea-b5ae-8eb1f88b57bf]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.885 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[48d2a008-06e8-493d-9924-bd8e2c5deddd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 178995, 'reachable_time': 39264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 144857, 'error': None, 'target': 'ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.886 2 DEBUG nova.virt.libvirt.guest [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'f2872aac-1d69-4520-8f77-6fc89e222bbf' (instance-0000000b) get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.887 2 INFO nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migration operation has completed
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.887 2 INFO nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] _post_live_migration() is started..
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.887 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1e1544c6-45c6-42c6-9964-06578beca8d2 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:22:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:29.887 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[337f22c9-68a1-4b2d-8d2c-11d47d9c0189]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:22:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d1e1544c6\x2d45c6\x2d42c6\x2d9964\x2d06578beca8d2.mount: Deactivated successfully.
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.903 2 WARNING neutronclient.v2_0.client [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.903 2 WARNING neutronclient.v2_0.client [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.993 2 DEBUG nova.compute.manager [req-7c48f240-394c-4510-a571-802d5d039ed7 req-3e0a6387-8994-40c2-881e-e654b4310c2b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.994 2 DEBUG oslo_concurrency.lockutils [req-7c48f240-394c-4510-a571-802d5d039ed7 req-3e0a6387-8994-40c2-881e-e654b4310c2b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.994 2 DEBUG oslo_concurrency.lockutils [req-7c48f240-394c-4510-a571-802d5d039ed7 req-3e0a6387-8994-40c2-881e-e654b4310c2b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.995 2 DEBUG oslo_concurrency.lockutils [req-7c48f240-394c-4510-a571-802d5d039ed7 req-3e0a6387-8994-40c2-881e-e654b4310c2b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.995 2 DEBUG nova.compute.manager [req-7c48f240-394c-4510-a571-802d5d039ed7 req-3e0a6387-8994-40c2-881e-e654b4310c2b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:22:29 compute-0 nova_compute[117413]: 2025-10-08 16:22:29.995 2 DEBUG nova.compute.manager [req-7c48f240-394c-4510-a571-802d5d039ed7 req-3e0a6387-8994-40c2-881e-e654b4310c2b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:22:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:30.229 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.371 2 DEBUG nova.network.neutron [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Activated binding for port 832861c7-4cbb-4865-bbf8-e006113d6965 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.372 2 DEBUG nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.374 2 DEBUG nova.virt.libvirt.vif [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:21:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-896091599',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-896091599',id=11,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:21:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='5c9a1345c489441a8f545f201bb4a01a',ramdisk_id='',reservation_id='r-sidf9kof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='manager,admin,member,reader',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1126837627',owner_user_name='tempest-TestExecuteBasicStrategy-1126837627-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:22:07Z,user_data=None,user_id='9c974ab1a4a14567b497ddc5498211f4',uuid=f2872aac-1d69-4520-8f77-6fc89e222bbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.374 2 DEBUG nova.network.os_vif_util [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "832861c7-4cbb-4865-bbf8-e006113d6965", "address": "fa:16:3e:15:29:0f", "network": {"id": "1e1544c6-45c6-42c6-9964-06578beca8d2", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1767326805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eb01c7700fd44ff4815f4e0cd565314f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832861c7-4c", "ovs_interfaceid": "832861c7-4cbb-4865-bbf8-e006113d6965", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.375 2 DEBUG nova.network.os_vif_util [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.376 2 DEBUG os_vif [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap832861c7-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.388 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=57c78d7b-33be-4bb1-8615-070e99b07649) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.392 2 INFO os_vif [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:29:0f,bridge_name='br-int',has_traffic_filtering=True,id=832861c7-4cbb-4865-bbf8-e006113d6965,network=Network(1e1544c6-45c6-42c6-9964-06578beca8d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832861c7-4c')
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.392 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.393 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.393 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.394 2 DEBUG nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.394 2 INFO nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Deleting instance files /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf_del
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.395 2 INFO nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Deletion of /var/lib/nova/instances/f2872aac-1d69-4520-8f77-6fc89e222bbf_del complete
Oct 08 16:22:30 compute-0 nova_compute[117413]: 2025-10-08 16:22:30.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:31 compute-0 sshd-session[144707]: Received disconnect from 80.94.93.176 port 12138:11:  [preauth]
Oct 08 16:22:31 compute-0 sshd-session[144707]: Disconnected from authenticating user root 80.94.93.176 port 12138 [preauth]
Oct 08 16:22:31 compute-0 sshd-session[144707]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: ERROR   16:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: ERROR   16:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: ERROR   16:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: ERROR   16:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: ERROR   16:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:22:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:22:31 compute-0 unix_chkpwd[144860]: password check failed for user (root)
Oct 08 16:22:31 compute-0 sshd-session[144858]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.043 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.044 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.045 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.045 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.045 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.046 2 WARNING nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received unexpected event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with vm_state active and task_state migrating.
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.046 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.046 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.046 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.046 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.047 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.047 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.047 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.047 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.048 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.048 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.048 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.048 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-unplugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.049 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.049 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.049 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.049 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.050 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.050 2 WARNING nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received unexpected event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with vm_state active and task_state migrating.
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.050 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.051 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.051 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.051 2 DEBUG oslo_concurrency.lockutils [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.051 2 DEBUG nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] No waiting events found dispatching network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:22:32 compute-0 nova_compute[117413]: 2025-10-08 16:22:32.052 2 WARNING nova.compute.manager [req-72602928-cc85-436f-99c9-980b48bec824 req-69e040b3-5e22-4f2a-abe5-6fd1c9ea310f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Received unexpected event network-vif-plugged-832861c7-4cbb-4865-bbf8-e006113d6965 for instance with vm_state active and task_state migrating.
Oct 08 16:22:33 compute-0 sshd-session[144858]: Failed password for root from 80.94.93.176 port 42794 ssh2
Oct 08 16:22:33 compute-0 podman[144861]: 2025-10-08 16:22:33.484636707 +0000 UTC m=+0.080743279 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd)
Oct 08 16:22:33 compute-0 unix_chkpwd[144882]: password check failed for user (root)
Oct 08 16:22:35 compute-0 nova_compute[117413]: 2025-10-08 16:22:35.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:35 compute-0 sshd-session[144858]: Failed password for root from 80.94.93.176 port 42794 ssh2
Oct 08 16:22:35 compute-0 unix_chkpwd[144883]: password check failed for user (root)
Oct 08 16:22:35 compute-0 nova_compute[117413]: 2025-10-08 16:22:35.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:37 compute-0 podman[144884]: 2025-10-08 16:22:37.506766326 +0000 UTC m=+0.098470178 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, version=9.6, config_id=edpm, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Oct 08 16:22:37 compute-0 sshd-session[144858]: Failed password for root from 80.94.93.176 port 42794 ssh2
Oct 08 16:22:38 compute-0 nova_compute[117413]: 2025-10-08 16:22:38.930 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:38 compute-0 nova_compute[117413]: 2025-10-08 16:22:38.930 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:38 compute-0 nova_compute[117413]: 2025-10-08 16:22:38.931 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "f2872aac-1d69-4520-8f77-6fc89e222bbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.445 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.445 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.446 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.446 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.623 2 WARNING nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.624 2 DEBUG oslo_concurrency.processutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.645 2 DEBUG oslo_concurrency.processutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.646 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6159MB free_disk=73.25469589233398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", 
"product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.647 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:39 compute-0 nova_compute[117413]: 2025-10-08 16:22:39.647 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:39 compute-0 sshd-session[144858]: Received disconnect from 80.94.93.176 port 42794:11:  [preauth]
Oct 08 16:22:39 compute-0 sshd-session[144858]: Disconnected from authenticating user root 80.94.93.176 port 42794 [preauth]
Oct 08 16:22:39 compute-0 sshd-session[144858]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.176  user=root
Oct 08 16:22:40 compute-0 nova_compute[117413]: 2025-10-08 16:22:40.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:40 compute-0 nova_compute[117413]: 2025-10-08 16:22:40.669 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Migration for instance f2872aac-1d69-4520-8f77-6fc89e222bbf refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:22:40 compute-0 nova_compute[117413]: 2025-10-08 16:22:40.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:41 compute-0 nova_compute[117413]: 2025-10-08 16:22:41.178 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 08 16:22:41 compute-0 nova_compute[117413]: 2025-10-08 16:22:41.215 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Migration e5cb604f-9f92-4c0e-aa10-69ed163a3635 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 08 16:22:41 compute-0 nova_compute[117413]: 2025-10-08 16:22:41.215 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:22:41 compute-0 nova_compute[117413]: 2025-10-08 16:22:41.216 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:22:39 up 30 min,  0 user,  load average: 0.21, 0.26, 0.28\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:22:41 compute-0 nova_compute[117413]: 2025-10-08 16:22:41.257 2 DEBUG nova.compute.provider_tree [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:22:41 compute-0 nova_compute[117413]: 2025-10-08 16:22:41.767 2 DEBUG nova.scheduler.client.report [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:22:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:41.898 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:41.898 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:22:41.898 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:42 compute-0 nova_compute[117413]: 2025-10-08 16:22:42.278 2 DEBUG nova.compute.resource_tracker [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:22:42 compute-0 nova_compute[117413]: 2025-10-08 16:22:42.279 2 DEBUG oslo_concurrency.lockutils [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.632s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:42 compute-0 nova_compute[117413]: 2025-10-08 16:22:42.298 2 INFO nova.compute.manager [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 08 16:22:43 compute-0 nova_compute[117413]: 2025-10-08 16:22:43.377 2 INFO nova.scheduler.client.report [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Deleted allocation for migration e5cb604f-9f92-4c0e-aa10-69ed163a3635
Oct 08 16:22:43 compute-0 nova_compute[117413]: 2025-10-08 16:22:43.377 2 DEBUG nova.virt.libvirt.driver [None req-b73a40dd-b912-47f7-97f7-335d617e2c57 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: f2872aac-1d69-4520-8f77-6fc89e222bbf] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 08 16:22:43 compute-0 podman[144909]: 2025-10-08 16:22:43.466412953 +0000 UTC m=+0.071012340 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:22:45 compute-0 nova_compute[117413]: 2025-10-08 16:22:45.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:45 compute-0 nova_compute[117413]: 2025-10-08 16:22:45.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:49 compute-0 podman[144929]: 2025-10-08 16:22:49.457302236 +0000 UTC m=+0.063525974 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:22:50 compute-0 nova_compute[117413]: 2025-10-08 16:22:50.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:50 compute-0 nova_compute[117413]: 2025-10-08 16:22:50.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:54 compute-0 podman[144948]: 2025-10-08 16:22:54.461379402 +0000 UTC m=+0.068098056 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:22:54 compute-0 podman[144949]: 2025-10-08 16:22:54.507202977 +0000 UTC m=+0.105140080 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 08 16:22:55 compute-0 nova_compute[117413]: 2025-10-08 16:22:55.311 2 DEBUG nova.compute.manager [None req-838e481b-ba00-4380-b86c-635aa02ac7eb b67951c3098d4bf994e39f4ac55e142e b2eb43725f1e4dbfa51aeb475eac607e - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Oct 08 16:22:55 compute-0 nova_compute[117413]: 2025-10-08 16:22:55.354 2 DEBUG nova.compute.provider_tree [None req-838e481b-ba00-4380-b86c-635aa02ac7eb b67951c3098d4bf994e39f4ac55e142e b2eb43725f1e4dbfa51aeb475eac607e - - default default] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 11 to 14 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:22:55 compute-0 nova_compute[117413]: 2025-10-08 16:22:55.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:55 compute-0 nova_compute[117413]: 2025-10-08 16:22:55.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:22:58 compute-0 nova_compute[117413]: 2025-10-08 16:22:58.871 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:22:59 compute-0 podman[127881]: time="2025-10-08T16:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:22:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:22:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3020 "" "Go-http-client/1.1"
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:22:59 compute-0 nova_compute[117413]: 2025-10-08 16:22:59.879 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.061 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.063 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.090 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.027s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.091 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6164MB free_disk=73.25469589233398GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.091 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.092 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:00 compute-0 nova_compute[117413]: 2025-10-08 16:23:00.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:01 compute-0 nova_compute[117413]: 2025-10-08 16:23:01.130 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:23:01 compute-0 nova_compute[117413]: 2025-10-08 16:23:01.130 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:23:00 up 31 min,  0 user,  load average: 0.15, 0.24, 0.27\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:23:01 compute-0 nova_compute[117413]: 2025-10-08 16:23:01.162 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: ERROR   16:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: ERROR   16:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: ERROR   16:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: ERROR   16:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: ERROR   16:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:23:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:23:01 compute-0 nova_compute[117413]: 2025-10-08 16:23:01.673 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:23:02 compute-0 nova_compute[117413]: 2025-10-08 16:23:02.183 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:23:02 compute-0 nova_compute[117413]: 2025-10-08 16:23:02.184 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.092s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:23:03 compute-0 nova_compute[117413]: 2025-10-08 16:23:03.185 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:23:03 compute-0 nova_compute[117413]: 2025-10-08 16:23:03.185 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:23:03 compute-0 nova_compute[117413]: 2025-10-08 16:23:03.185 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:23:03 compute-0 nova_compute[117413]: 2025-10-08 16:23:03.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:23:04 compute-0 podman[145002]: 2025-10-08 16:23:04.446749789 +0000 UTC m=+0.056429182 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 08 16:23:05 compute-0 nova_compute[117413]: 2025-10-08 16:23:05.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:05 compute-0 nova_compute[117413]: 2025-10-08 16:23:05.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:08 compute-0 podman[145023]: 2025-10-08 16:23:08.489452289 +0000 UTC m=+0.085275519 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:23:10 compute-0 nova_compute[117413]: 2025-10-08 16:23:10.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:10 compute-0 nova_compute[117413]: 2025-10-08 16:23:10.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:14 compute-0 podman[145046]: 2025-10-08 16:23:14.46356394 +0000 UTC m=+0.067093668 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:23:14 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:14.965 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:a2:61 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9861c3ac-dfb3-439b-b0e1-4ad7551ff45f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9861c3ac-dfb3-439b-b0e1-4ad7551ff45f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33d1316c725d420cb338d04ca3c280e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=062e8076-ab1d-4027-9a70-8ad04e1ca4de, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=93f68cb6-4666-457d-b4d6-a414174ba6f1) old=Port_Binding(mac=['fa:16:3e:dd:a2:61'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-9861c3ac-dfb3-439b-b0e1-4ad7551ff45f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9861c3ac-dfb3-439b-b0e1-4ad7551ff45f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33d1316c725d420cb338d04ca3c280e5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:23:14 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:14.966 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 93f68cb6-4666-457d-b4d6-a414174ba6f1 in datapath 9861c3ac-dfb3-439b-b0e1-4ad7551ff45f updated
Oct 08 16:23:14 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:14.966 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9861c3ac-dfb3-439b-b0e1-4ad7551ff45f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:23:14 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:14.967 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3747ac28-f5eb-479b-a81d-ec5d539a56ee]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:23:15 compute-0 nova_compute[117413]: 2025-10-08 16:23:15.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:15 compute-0 nova_compute[117413]: 2025-10-08 16:23:15.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:20 compute-0 nova_compute[117413]: 2025-10-08 16:23:20.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:20 compute-0 podman[145067]: 2025-10-08 16:23:20.458573712 +0000 UTC m=+0.062639799 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:23:20 compute-0 nova_compute[117413]: 2025-10-08 16:23:20.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:22.060 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:ed:e7 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6b09fe14-2e77-4a83-976f-bc78248804be', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b09fe14-2e77-4a83-976f-bc78248804be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e38a501b78934f8ba6facd79491de4da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f51555d4-e233-4984-9dc8-9468c758a040, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6bd196fe-7f01-4f9a-bbef-1230e1538a58) old=Port_Binding(mac=['fa:16:3e:cb:ed:e7'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-6b09fe14-2e77-4a83-976f-bc78248804be', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b09fe14-2e77-4a83-976f-bc78248804be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e38a501b78934f8ba6facd79491de4da', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:23:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:22.062 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6bd196fe-7f01-4f9a-bbef-1230e1538a58 in datapath 6b09fe14-2e77-4a83-976f-bc78248804be updated
Oct 08 16:23:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:22.063 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b09fe14-2e77-4a83-976f-bc78248804be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:23:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:22.064 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff44dac-5971-4d5c-b55b-720cc6d51ce6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:23:25 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:25.148 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:23:25 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:25.149 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:23:25 compute-0 nova_compute[117413]: 2025-10-08 16:23:25.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:25 compute-0 nova_compute[117413]: 2025-10-08 16:23:25.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:25 compute-0 podman[145087]: 2025-10-08 16:23:25.442265604 +0000 UTC m=+0.044042075 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:23:25 compute-0 podman[145088]: 2025-10-08 16:23:25.483882479 +0000 UTC m=+0.083927611 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:23:25 compute-0 nova_compute[117413]: 2025-10-08 16:23:25.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:29 compute-0 podman[127881]: time="2025-10-08T16:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:23:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:23:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3020 "" "Go-http-client/1.1"
Oct 08 16:23:30 compute-0 nova_compute[117413]: 2025-10-08 16:23:30.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:30 compute-0 nova_compute[117413]: 2025-10-08 16:23:30.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: ERROR   16:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: ERROR   16:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: ERROR   16:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: ERROR   16:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: ERROR   16:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:23:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:23:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:33.151 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:23:35 compute-0 ovn_controller[19768]: 2025-10-08T16:23:35Z|00097|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 08 16:23:35 compute-0 nova_compute[117413]: 2025-10-08 16:23:35.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:35 compute-0 podman[145137]: 2025-10-08 16:23:35.450477618 +0000 UTC m=+0.060583530 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 08 16:23:35 compute-0 nova_compute[117413]: 2025-10-08 16:23:35.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:39 compute-0 podman[145159]: 2025-10-08 16:23:39.457907235 +0000 UTC m=+0.061812375 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 08 16:23:40 compute-0 nova_compute[117413]: 2025-10-08 16:23:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:40 compute-0 nova_compute[117413]: 2025-10-08 16:23:40.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:41.899 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:23:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:41.899 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:23:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:41.900 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:23:45 compute-0 nova_compute[117413]: 2025-10-08 16:23:45.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:45 compute-0 podman[145181]: 2025-10-08 16:23:45.474434443 +0000 UTC m=+0.083102097 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 08 16:23:45 compute-0 nova_compute[117413]: 2025-10-08 16:23:45.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:50 compute-0 nova_compute[117413]: 2025-10-08 16:23:50.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:50 compute-0 nova_compute[117413]: 2025-10-08 16:23:50.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:51 compute-0 podman[145201]: 2025-10-08 16:23:51.439742952 +0000 UTC m=+0.049699128 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:23:51 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:51.972 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:36:b7 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7ecc0bf1374713b56acbc9eabf6d9e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c11878dc-b81c-4cd4-8280-26645e84c0d9) old=Port_Binding(mac=['fa:16:3e:9d:36:b7'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7ecc0bf1374713b56acbc9eabf6d9e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:23:51 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:51.973 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c11878dc-b81c-4cd4-8280-26645e84c0d9 in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a updated
Oct 08 16:23:51 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:51.973 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 56ad396c-4245-4eb9-9237-69e9ea6a760a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:23:51 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:23:51.974 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4912a0ae-f169-49cc-83ba-dbc75c3d3f7b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:23:55 compute-0 nova_compute[117413]: 2025-10-08 16:23:55.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:55 compute-0 nova_compute[117413]: 2025-10-08 16:23:55.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:23:56 compute-0 podman[145220]: 2025-10-08 16:23:56.457681895 +0000 UTC m=+0.048457462 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:23:56 compute-0 podman[145221]: 2025-10-08 16:23:56.519731997 +0000 UTC m=+0.105121659 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 08 16:23:58 compute-0 nova_compute[117413]: 2025-10-08 16:23:58.360 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:23:59 compute-0 nova_compute[117413]: 2025-10-08 16:23:59.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:23:59 compute-0 podman[127881]: time="2025-10-08T16:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:23:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:23:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3016 "" "Go-http-client/1.1"
Oct 08 16:23:59 compute-0 nova_compute[117413]: 2025-10-08 16:23:59.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:23:59 compute-0 nova_compute[117413]: 2025-10-08 16:23:59.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:23:59 compute-0 nova_compute[117413]: 2025-10-08 16:23:59.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:23:59 compute-0 nova_compute[117413]: 2025-10-08 16:23:59.879 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.033 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.034 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.057 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.058 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6160MB free_disk=73.25474548339844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.058 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.059 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:00.246 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:94:98 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-6bd6e161-da8a-4248-99e6-e4444f347c2c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6bd6e161-da8a-4248-99e6-e4444f347c2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6d303b7-0a73-4491-a3e1-3f0d5feb38bf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a1149839-714a-4bdd-8b63-fc54bec96c9e) old=Port_Binding(mac=['fa:16:3e:4c:94:98'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-6bd6e161-da8a-4248-99e6-e4444f347c2c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6bd6e161-da8a-4248-99e6-e4444f347c2c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:24:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:00.247 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a1149839-714a-4bdd-8b63-fc54bec96c9e in datapath 6bd6e161-da8a-4248-99e6-e4444f347c2c updated
Oct 08 16:24:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:00.248 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6bd6e161-da8a-4248-99e6-e4444f347c2c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:24:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:00.249 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0e9b3c-de90-4e0a-8934-034c35fb0647]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:00 compute-0 nova_compute[117413]: 2025-10-08 16:24:00.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:01 compute-0 nova_compute[117413]: 2025-10-08 16:24:01.118 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:24:01 compute-0 nova_compute[117413]: 2025-10-08 16:24:01.118 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:24:00 up 32 min,  0 user,  load average: 0.05, 0.19, 0.25\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:24:01 compute-0 nova_compute[117413]: 2025-10-08 16:24:01.137 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: ERROR   16:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: ERROR   16:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: ERROR   16:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: ERROR   16:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: ERROR   16:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:24:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:24:01 compute-0 anacron[90088]: Job `cron.daily' started
Oct 08 16:24:01 compute-0 anacron[90088]: Job `cron.daily' terminated
Oct 08 16:24:01 compute-0 nova_compute[117413]: 2025-10-08 16:24:01.643 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:24:02 compute-0 nova_compute[117413]: 2025-10-08 16:24:02.159 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:24:02 compute-0 nova_compute[117413]: 2025-10-08 16:24:02.159 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.100s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:03 compute-0 nova_compute[117413]: 2025-10-08 16:24:03.159 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:03 compute-0 nova_compute[117413]: 2025-10-08 16:24:03.159 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:03 compute-0 nova_compute[117413]: 2025-10-08 16:24:03.160 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:03 compute-0 nova_compute[117413]: 2025-10-08 16:24:03.160 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:03 compute-0 nova_compute[117413]: 2025-10-08 16:24:03.160 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:24:03 compute-0 nova_compute[117413]: 2025-10-08 16:24:03.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:04 compute-0 nova_compute[117413]: 2025-10-08 16:24:04.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:04 compute-0 nova_compute[117413]: 2025-10-08 16:24:04.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:05 compute-0 nova_compute[117413]: 2025-10-08 16:24:05.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:05 compute-0 nova_compute[117413]: 2025-10-08 16:24:05.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:06 compute-0 podman[145272]: 2025-10-08 16:24:06.496932971 +0000 UTC m=+0.105285114 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.schema-version=1.0)
Oct 08 16:24:06 compute-0 nova_compute[117413]: 2025-10-08 16:24:06.865 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:10 compute-0 nova_compute[117413]: 2025-10-08 16:24:10.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:10 compute-0 nova_compute[117413]: 2025-10-08 16:24:10.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:24:10 compute-0 nova_compute[117413]: 2025-10-08 16:24:10.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:10 compute-0 podman[145292]: 2025-10-08 16:24:10.470008351 +0000 UTC m=+0.077253469 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Oct 08 16:24:10 compute-0 nova_compute[117413]: 2025-10-08 16:24:10.875 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:24:10 compute-0 nova_compute[117413]: 2025-10-08 16:24:10.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:11 compute-0 nova_compute[117413]: 2025-10-08 16:24:11.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:11 compute-0 nova_compute[117413]: 2025-10-08 16:24:11.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:24:15 compute-0 nova_compute[117413]: 2025-10-08 16:24:15.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:15 compute-0 nova_compute[117413]: 2025-10-08 16:24:15.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:16 compute-0 podman[145313]: 2025-10-08 16:24:16.4387472 +0000 UTC m=+0.048149784 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 08 16:24:20 compute-0 nova_compute[117413]: 2025-10-08 16:24:20.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:20 compute-0 nova_compute[117413]: 2025-10-08 16:24:20.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:22 compute-0 podman[145333]: 2025-10-08 16:24:22.443690226 +0000 UTC m=+0.051181010 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:24:25 compute-0 nova_compute[117413]: 2025-10-08 16:24:25.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:25 compute-0 nova_compute[117413]: 2025-10-08 16:24:25.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:27 compute-0 podman[145353]: 2025-10-08 16:24:27.448658879 +0000 UTC m=+0.059965053 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:24:27 compute-0 podman[145354]: 2025-10-08 16:24:27.479168793 +0000 UTC m=+0.088575782 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 08 16:24:29 compute-0 podman[127881]: time="2025-10-08T16:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:24:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:24:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3018 "" "Go-http-client/1.1"
Oct 08 16:24:30 compute-0 nova_compute[117413]: 2025-10-08 16:24:30.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:30 compute-0 nova_compute[117413]: 2025-10-08 16:24:30.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: ERROR   16:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: ERROR   16:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: ERROR   16:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: ERROR   16:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:24:31 compute-0 openstack_network_exporter[130039]: ERROR   16:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:24:33 compute-0 nova_compute[117413]: 2025-10-08 16:24:33.976 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:33 compute-0 nova_compute[117413]: 2025-10-08 16:24:33.977 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:34 compute-0 nova_compute[117413]: 2025-10-08 16:24:34.640 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:24:35 compute-0 nova_compute[117413]: 2025-10-08 16:24:35.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:36 compute-0 nova_compute[117413]: 2025-10-08 16:24:36.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:36 compute-0 nova_compute[117413]: 2025-10-08 16:24:36.008 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:36 compute-0 nova_compute[117413]: 2025-10-08 16:24:36.008 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:36 compute-0 nova_compute[117413]: 2025-10-08 16:24:36.016 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:24:36 compute-0 nova_compute[117413]: 2025-10-08 16:24:36.016 2 INFO nova.compute.claims [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:24:37 compute-0 nova_compute[117413]: 2025-10-08 16:24:37.303 2 DEBUG nova.compute.provider_tree [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:24:37 compute-0 podman[145401]: 2025-10-08 16:24:37.462629847 +0000 UTC m=+0.059320504 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 08 16:24:37 compute-0 nova_compute[117413]: 2025-10-08 16:24:37.834 2 DEBUG nova.scheduler.client.report [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:24:38 compute-0 nova_compute[117413]: 2025-10-08 16:24:38.362 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.353s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:38 compute-0 nova_compute[117413]: 2025-10-08 16:24:38.364 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:24:38 compute-0 nova_compute[117413]: 2025-10-08 16:24:38.889 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:24:38 compute-0 nova_compute[117413]: 2025-10-08 16:24:38.890 2 DEBUG nova.network.neutron [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:24:38 compute-0 nova_compute[117413]: 2025-10-08 16:24:38.890 2 WARNING neutronclient.v2_0.client [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:24:38 compute-0 nova_compute[117413]: 2025-10-08 16:24:38.891 2 WARNING neutronclient.v2_0.client [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:24:39 compute-0 nova_compute[117413]: 2025-10-08 16:24:39.424 2 INFO nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:24:39 compute-0 nova_compute[117413]: 2025-10-08 16:24:39.599 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:24:40 compute-0 nova_compute[117413]: 2025-10-08 16:24:40.013 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:24:40 compute-0 nova_compute[117413]: 2025-10-08 16:24:40.299 2 WARNING nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Oct 08 16:24:40 compute-0 nova_compute[117413]: 2025-10-08 16:24:40.300 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Triggering sync for uuid 50b0b920-cb3d-445e-8a86-8b36faf27091 _sync_power_states /usr/lib/python3.12/site-packages/nova/compute/manager.py:11020
Oct 08 16:24:40 compute-0 nova_compute[117413]: 2025-10-08 16:24:40.300 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:40 compute-0 nova_compute[117413]: 2025-10-08 16:24:40.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:41 compute-0 podman[145421]: 2025-10-08 16:24:41.481182003 +0000 UTC m=+0.079370419 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 08 16:24:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:41.734 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:41.735 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.872 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.874 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.875 2 INFO nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Creating image(s)
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.875 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "/var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.876 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "/var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.877 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "/var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.878 2 DEBUG oslo_utils.imageutils.format_inspector [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.884 2 DEBUG oslo_utils.imageutils.format_inspector [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.886 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:41.900 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:41.901 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:41.901 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.944 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.945 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.946 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.946 2 DEBUG oslo_utils.imageutils.format_inspector [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.950 2 DEBUG oslo_utils.imageutils.format_inspector [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:24:41 compute-0 nova_compute[117413]: 2025-10-08 16:24:41.951 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.007 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.008 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.031 2 DEBUG nova.network.neutron [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Successfully created port: 021d7c00-83a0-4211-a29f-23f96ad2535c _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.050 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.051 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.105s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.051 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.110 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.112 2 DEBUG nova.virt.disk.api [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Checking if we can resize image /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.113 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.174 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.176 2 DEBUG nova.virt.disk.api [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Cannot resize image /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.176 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.177 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Ensure instance console log exists: /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.178 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.179 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:42 compute-0 nova_compute[117413]: 2025-10-08 16:24:42.179 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.110 2 DEBUG nova.network.neutron [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Successfully updated port: 021d7c00-83a0-4211-a29f-23f96ad2535c _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.373 2 DEBUG nova.compute.manager [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-changed-021d7c00-83a0-4211-a29f-23f96ad2535c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.374 2 DEBUG nova.compute.manager [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Refreshing instance network info cache due to event network-changed-021d7c00-83a0-4211-a29f-23f96ad2535c. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.374 2 DEBUG oslo_concurrency.lockutils [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-50b0b920-cb3d-445e-8a86-8b36faf27091" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.374 2 DEBUG oslo_concurrency.lockutils [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-50b0b920-cb3d-445e-8a86-8b36faf27091" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.375 2 DEBUG nova.network.neutron [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Refreshing network info cache for port 021d7c00-83a0-4211-a29f-23f96ad2535c _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.628 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "refresh_cache-50b0b920-cb3d-445e-8a86-8b36faf27091" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:24:44 compute-0 nova_compute[117413]: 2025-10-08 16:24:44.902 2 WARNING neutronclient.v2_0.client [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:24:45 compute-0 nova_compute[117413]: 2025-10-08 16:24:45.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:45 compute-0 nova_compute[117413]: 2025-10-08 16:24:45.920 2 DEBUG nova.network.neutron [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:24:46 compute-0 nova_compute[117413]: 2025-10-08 16:24:46.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:46 compute-0 nova_compute[117413]: 2025-10-08 16:24:46.493 2 DEBUG nova.network.neutron [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:24:47 compute-0 nova_compute[117413]: 2025-10-08 16:24:47.040 2 DEBUG oslo_concurrency.lockutils [req-41b12e2c-f2f7-41e6-993d-4944b6f0b98b req-1ca92934-cb1d-4aa3-90c5-1052978f81f4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-50b0b920-cb3d-445e-8a86-8b36faf27091" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:24:47 compute-0 nova_compute[117413]: 2025-10-08 16:24:47.041 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquired lock "refresh_cache-50b0b920-cb3d-445e-8a86-8b36faf27091" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:24:47 compute-0 nova_compute[117413]: 2025-10-08 16:24:47.041 2 DEBUG nova.network.neutron [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:24:47 compute-0 podman[145459]: 2025-10-08 16:24:47.441020686 +0000 UTC m=+0.048797032 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=iscsid, container_name=iscsid, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0)
Oct 08 16:24:47 compute-0 nova_compute[117413]: 2025-10-08 16:24:47.921 2 DEBUG nova.network.neutron [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.149 2 WARNING neutronclient.v2_0.client [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.396 2 DEBUG nova.network.neutron [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Updating instance_info_cache with network_info: [{"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.920 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Releasing lock "refresh_cache-50b0b920-cb3d-445e-8a86-8b36faf27091" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.920 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Instance network_info: |[{"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.923 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Start _get_guest_xml network_info=[{"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.927 2 WARNING nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.928 2 DEBUG nova.virt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteHostMaintenanceStrategy-server-2035630336', uuid='50b0b920-cb3d-445e-8a86-8b36faf27091'), owner=OwnerMeta(userid='93b0b144b7494967bce532f29a6a5c53', username='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin', projectid='1820638f7dc1498db1dd11607c4370f2', projectname='tempest-TestExecuteHostMaintenanceStrategy-1649105137'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": 
"021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940688.9289181) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.934 2 DEBUG nova.virt.libvirt.host [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.935 2 DEBUG nova.virt.libvirt.host [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.937 2 DEBUG nova.virt.libvirt.host [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.938 2 DEBUG nova.virt.libvirt.host [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.938 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.938 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.939 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.939 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.939 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.939 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.939 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.940 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.940 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.940 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.940 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.940 2 DEBUG nova.virt.hardware [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.944 2 DEBUG nova.virt.libvirt.vif [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:24:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-2035630336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-2035630336',id=13,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-e58mlcox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:24:40Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=50b0b920-cb3d-445e-8a86-8b36faf27091,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.944 2 DEBUG nova.network.os_vif_util [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.945 2 DEBUG nova.network.os_vif_util [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:24:48 compute-0 nova_compute[117413]: 2025-10-08 16:24:48.946 2 DEBUG nova.objects.instance [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50b0b920-cb3d-445e-8a86-8b36faf27091 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.477 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <uuid>50b0b920-cb3d-445e-8a86-8b36faf27091</uuid>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <name>instance-0000000d</name>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-2035630336</nova:name>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:24:48</nova:creationTime>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:24:49 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:24:49 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:user uuid="93b0b144b7494967bce532f29a6a5c53">tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin</nova:user>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:project uuid="1820638f7dc1498db1dd11607c4370f2">tempest-TestExecuteHostMaintenanceStrategy-1649105137</nova:project>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         <nova:port uuid="021d7c00-83a0-4211-a29f-23f96ad2535c">
Oct 08 16:24:49 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <system>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <entry name="serial">50b0b920-cb3d-445e-8a86-8b36faf27091</entry>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <entry name="uuid">50b0b920-cb3d-445e-8a86-8b36faf27091</entry>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </system>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <os>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </os>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <features>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </features>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.config"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:54:e9:5e"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <target dev="tap021d7c00-83"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/console.log" append="off"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <video>
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </video>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:24:49 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:24:49 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:24:49 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:24:49 compute-0 nova_compute[117413]: </domain>
Oct 08 16:24:49 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.478 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Preparing to wait for external event network-vif-plugged-021d7c00-83a0-4211-a29f-23f96ad2535c prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.478 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.478 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.479 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.479 2 DEBUG nova.virt.libvirt.vif [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:24:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-2035630336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-2035630336',id=13,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-e58mlcox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:24:40Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=50b0b920-cb3d-445e-8a86-8b36faf27091,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.479 2 DEBUG nova.network.os_vif_util [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.480 2 DEBUG nova.network.os_vif_util [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.480 2 DEBUG os_vif [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.481 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.482 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '897c352a-069c-5b79-9c5f-ae43dc87641c', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap021d7c00-83, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap021d7c00-83, col_values=(('qos', UUID('c04e722a-974d-412b-932f-499040c59e7c')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.488 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap021d7c00-83, col_values=(('external_ids', {'iface-id': '021d7c00-83a0-4211-a29f-23f96ad2535c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:e9:5e', 'vm-uuid': '50b0b920-cb3d-445e-8a86-8b36faf27091'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 NetworkManager[1034]: <info>  [1759940689.4898] manager: (tap021d7c00-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:49 compute-0 nova_compute[117413]: 2025-10-08 16:24:49.496 2 INFO os_vif [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83')
Oct 08 16:24:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:49.737 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.224 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.225 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.225 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] No VIF found with MAC fa:16:3e:54:e9:5e, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.226 2 INFO nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Using config drive
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.735 2 WARNING neutronclient.v2_0.client [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.904 2 INFO nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Creating config drive at /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.config
Oct 08 16:24:51 compute-0 nova_compute[117413]: 2025-10-08 16:24:51.909 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpxew1cp86 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.031 2 DEBUG oslo_concurrency.processutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpxew1cp86" returned: 0 in 0.122s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:24:52 compute-0 kernel: tap021d7c00-83: entered promiscuous mode
Oct 08 16:24:52 compute-0 NetworkManager[1034]: <info>  [1759940692.0819] manager: (tap021d7c00-83): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct 08 16:24:52 compute-0 ovn_controller[19768]: 2025-10-08T16:24:52Z|00098|binding|INFO|Claiming lport 021d7c00-83a0-4211-a29f-23f96ad2535c for this chassis.
Oct 08 16:24:52 compute-0 ovn_controller[19768]: 2025-10-08T16:24:52Z|00099|binding|INFO|021d7c00-83a0-4211-a29f-23f96ad2535c: Claiming fa:16:3e:54:e9:5e 10.100.0.5
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.098 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:e9:5e 10.100.0.5'], port_security=['fa:16:3e:54:e9:5e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50b0b920-cb3d-445e-8a86-8b36faf27091', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=021d7c00-83a0-4211-a29f-23f96ad2535c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.098 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 021d7c00-83a0-4211-a29f-23f96ad2535c in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a bound to our chassis
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.099 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.111 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6e339ed1-dd1a-45cc-9b2e-ddbc90285ab6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.111 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap56ad396c-41 in ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.112 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap56ad396c-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.113 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[74131304-13e2-4dcb-9fcb-1c52f99317e5]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.114 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4444b4c7-a694-4bb5-9bcc-906df50ee0f6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 systemd-udevd[145499]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.127 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[93794e4d-26e3-4fbc-ac6f-004a6f2cb9ca]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 systemd-machined[77548]: New machine qemu-8-instance-0000000d.
Oct 08 16:24:52 compute-0 NetworkManager[1034]: <info>  [1759940692.1327] device (tap021d7c00-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:24:52 compute-0 NetworkManager[1034]: <info>  [1759940692.1340] device (tap021d7c00-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:24:52 compute-0 ovn_controller[19768]: 2025-10-08T16:24:52Z|00100|binding|INFO|Setting lport 021d7c00-83a0-4211-a29f-23f96ad2535c ovn-installed in OVS
Oct 08 16:24:52 compute-0 ovn_controller[19768]: 2025-10-08T16:24:52Z|00101|binding|INFO|Setting lport 021d7c00-83a0-4211-a29f-23f96ad2535c up in Southbound
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.151 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0f187075-353c-4b6c-ae69-f99a8924ab38]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-0000000d.
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.182 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[070e9ccd-b804-403f-bcb1-53a60f5d7fea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.187 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1612e7df-68cc-4a2d-98a4-961c73167031]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 systemd-udevd[145504]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:24:52 compute-0 NetworkManager[1034]: <info>  [1759940692.1886] manager: (tap56ad396c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.224 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[4d688a9c-f92b-4585-9b9c-3c347fb41090]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.228 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[c8eb7080-692e-4ae1-a33c-6f87e20ff81e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 NetworkManager[1034]: <info>  [1759940692.2501] device (tap56ad396c-40): carrier: link connected
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.259 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa5d37c-9e88-42c0-8e36-8725b30b246e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.277 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[18249bb1-f598-4242-af0f-1e598ee401dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 198094, 'reachable_time': 21476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 145532, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.294 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a28fd3-7b6e-4913-9983-1e556ce315dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:36b7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 198094, 'tstamp': 198094}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145533, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.312 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e17c53-865b-4747-8d67-b2f9d2425ed1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 198094, 'reachable_time': 21476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 145534, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.348 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8689f90b-f576-4e6d-9a38-7b2f612ace88]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.416 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[306e37a0-ea4d-4448-b12f-3b4dbff5cc3e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.418 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.418 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.418 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ad396c-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:52 compute-0 NetworkManager[1034]: <info>  [1759940692.4736] manager: (tap56ad396c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 kernel: tap56ad396c-40: entered promiscuous mode
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.477 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap56ad396c-40, col_values=(('external_ids', {'iface-id': 'c11878dc-b81c-4cd4-8280-26645e84c0d9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:24:52 compute-0 ovn_controller[19768]: 2025-10-08T16:24:52Z|00102|binding|INFO|Releasing lport c11878dc-b81c-4cd4-8280-26645e84c0d9 from this chassis (sb_readonly=0)
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 nova_compute[117413]: 2025-10-08 16:24:52.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.497 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[77b516e2-a17a-4857-91a0-ed18f2c8f130]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.498 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.498 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.498 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 56ad396c-4245-4eb9-9237-69e9ea6a760a disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.498 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.498 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[147e920c-7ace-4716-ab36-a26ebaf320f4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.499 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.499 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb62e81-8d30-4ee8-ad51-edefe2fad19f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.500 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:24:52 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:24:52.500 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'env', 'PROCESS_TAG=haproxy-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/56ad396c-4245-4eb9-9237-69e9ea6a760a.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:24:52 compute-0 podman[145566]: 2025-10-08 16:24:52.883071193 +0000 UTC m=+0.054258468 container create 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Oct 08 16:24:52 compute-0 systemd[1]: Started libpod-conmon-56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233.scope.
Oct 08 16:24:52 compute-0 podman[145566]: 2025-10-08 16:24:52.856064338 +0000 UTC m=+0.027251643 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:24:52 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ec1d34df830d9ae3f30941a6af6c871464e2b7443d307a1baf05e872cc6753/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:24:52 compute-0 podman[145566]: 2025-10-08 16:24:52.975783355 +0000 UTC m=+0.146970650 container init 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 08 16:24:52 compute-0 podman[145566]: 2025-10-08 16:24:52.983969469 +0000 UTC m=+0.155156744 container start 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:24:52 compute-0 podman[145582]: 2025-10-08 16:24:52.992185185 +0000 UTC m=+0.061253179 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 08 16:24:53 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [NOTICE]   (145603) : New worker (145605) forked
Oct 08 16:24:53 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [NOTICE]   (145603) : Loading success.
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.032 2 DEBUG nova.compute.manager [req-2afcf843-244c-40b0-9616-89de85c1797d req-403ab609-c3cd-4bf3-898b-b3bcc719ae2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-plugged-021d7c00-83a0-4211-a29f-23f96ad2535c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.033 2 DEBUG oslo_concurrency.lockutils [req-2afcf843-244c-40b0-9616-89de85c1797d req-403ab609-c3cd-4bf3-898b-b3bcc719ae2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.033 2 DEBUG oslo_concurrency.lockutils [req-2afcf843-244c-40b0-9616-89de85c1797d req-403ab609-c3cd-4bf3-898b-b3bcc719ae2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.033 2 DEBUG oslo_concurrency.lockutils [req-2afcf843-244c-40b0-9616-89de85c1797d req-403ab609-c3cd-4bf3-898b-b3bcc719ae2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.034 2 DEBUG nova.compute.manager [req-2afcf843-244c-40b0-9616-89de85c1797d req-403ab609-c3cd-4bf3-898b-b3bcc719ae2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Processing event network-vif-plugged-021d7c00-83a0-4211-a29f-23f96ad2535c _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.825 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.829 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.832 2 INFO nova.virt.libvirt.driver [-] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Instance spawned successfully.
Oct 08 16:24:53 compute-0 nova_compute[117413]: 2025-10-08 16:24:53.833 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.343 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.344 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.344 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.345 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.345 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.346 2 DEBUG nova.virt.libvirt.driver [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.854 2 INFO nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Took 12.98 seconds to spawn the instance on the hypervisor.
Oct 08 16:24:54 compute-0 nova_compute[117413]: 2025-10-08 16:24:54.855 2 DEBUG nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.077 2 DEBUG nova.compute.manager [req-5f56ab49-1907-4c04-b5cd-356262dd1383 req-1880eef7-1679-42aa-9929-833e0939fa59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-plugged-021d7c00-83a0-4211-a29f-23f96ad2535c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.078 2 DEBUG oslo_concurrency.lockutils [req-5f56ab49-1907-4c04-b5cd-356262dd1383 req-1880eef7-1679-42aa-9929-833e0939fa59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.078 2 DEBUG oslo_concurrency.lockutils [req-5f56ab49-1907-4c04-b5cd-356262dd1383 req-1880eef7-1679-42aa-9929-833e0939fa59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.078 2 DEBUG oslo_concurrency.lockutils [req-5f56ab49-1907-4c04-b5cd-356262dd1383 req-1880eef7-1679-42aa-9929-833e0939fa59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.079 2 DEBUG nova.compute.manager [req-5f56ab49-1907-4c04-b5cd-356262dd1383 req-1880eef7-1679-42aa-9929-833e0939fa59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] No waiting events found dispatching network-vif-plugged-021d7c00-83a0-4211-a29f-23f96ad2535c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.079 2 WARNING nova.compute.manager [req-5f56ab49-1907-4c04-b5cd-356262dd1383 req-1880eef7-1679-42aa-9929-833e0939fa59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received unexpected event network-vif-plugged-021d7c00-83a0-4211-a29f-23f96ad2535c for instance with vm_state active and task_state None.
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.378 2 INFO nova.compute.manager [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Took 19.60 seconds to build instance.
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.884 2 DEBUG oslo_concurrency.lockutils [None req-29afa593-be64-4d61-b9d9-c600680b7c53 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.907s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 15.584s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.884 2 INFO nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] During sync_power_state the instance has a pending task (networking). Skip.
Oct 08 16:24:55 compute-0 nova_compute[117413]: 2025-10-08 16:24:55.885 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:24:56 compute-0 nova_compute[117413]: 2025-10-08 16:24:56.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:58 compute-0 podman[145622]: 2025-10-08 16:24:58.443652835 +0000 UTC m=+0.054204097 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:24:58 compute-0 podman[145623]: 2025-10-08 16:24:58.496629555 +0000 UTC m=+0.103407649 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:24:59 compute-0 nova_compute[117413]: 2025-10-08 16:24:59.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:24:59 compute-0 podman[127881]: time="2025-10-08T16:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:24:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:24:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3489 "" "Go-http-client/1.1"
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.058 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:00 compute-0 nova_compute[117413]: 2025-10-08 16:25:00.879 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:25:01 compute-0 nova_compute[117413]: 2025-10-08 16:25:01.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: ERROR   16:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: ERROR   16:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: ERROR   16:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: ERROR   16:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: ERROR   16:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:25:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:25:01 compute-0 nova_compute[117413]: 2025-10-08 16:25:01.944 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:01 compute-0 nova_compute[117413]: 2025-10-08 16:25:01.998 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:01 compute-0 nova_compute[117413]: 2025-10-08 16:25:01.999 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.053 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.179 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.180 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.197 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.199 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6017MB free_disk=73.25395202636719GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.199 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:02 compute-0 nova_compute[117413]: 2025-10-08 16:25:02.199 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.312 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 50b0b920-cb3d-445e-8a86-8b36faf27091 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.313 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.313 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:25:02 up 33 min,  0 user,  load average: 0.30, 0.22, 0.26\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_1820638f7dc1498db1dd11607c4370f2': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.357 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.397 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.398 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.408 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.427 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.459 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:25:03 compute-0 nova_compute[117413]: 2025-10-08 16:25:03.968 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:25:04 compute-0 nova_compute[117413]: 2025-10-08 16:25:04.480 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:25:04 compute-0 nova_compute[117413]: 2025-10-08 16:25:04.480 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.281s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:04 compute-0 nova_compute[117413]: 2025-10-08 16:25:04.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:05 compute-0 ovn_controller[19768]: 2025-10-08T16:25:05Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:e9:5e 10.100.0.5
Oct 08 16:25:05 compute-0 ovn_controller[19768]: 2025-10-08T16:25:05Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:e9:5e 10.100.0.5
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.479 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.480 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.480 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.480 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.480 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:25:06 compute-0 nova_compute[117413]: 2025-10-08 16:25:06.480 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:25:07 compute-0 nova_compute[117413]: 2025-10-08 16:25:07.295 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Creating tmpfile /var/lib/nova/instances/tmp68aee_uv to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:25:07 compute-0 nova_compute[117413]: 2025-10-08 16:25:07.296 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:07 compute-0 nova_compute[117413]: 2025-10-08 16:25:07.299 2 DEBUG nova.compute.manager [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp68aee_uv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:25:08 compute-0 podman[145695]: 2025-10-08 16:25:08.436837546 +0000 UTC m=+0.049089570 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 08 16:25:09 compute-0 nova_compute[117413]: 2025-10-08 16:25:09.331 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:09 compute-0 nova_compute[117413]: 2025-10-08 16:25:09.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:11 compute-0 nova_compute[117413]: 2025-10-08 16:25:11.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:12 compute-0 podman[145715]: 2025-10-08 16:25:12.44278939 +0000 UTC m=+0.053311951 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Oct 08 16:25:13 compute-0 nova_compute[117413]: 2025-10-08 16:25:13.721 2 DEBUG nova.compute.manager [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp68aee_uv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:25:14 compute-0 nova_compute[117413]: 2025-10-08 16:25:14.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:14 compute-0 nova_compute[117413]: 2025-10-08 16:25:14.736 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:25:14 compute-0 nova_compute[117413]: 2025-10-08 16:25:14.736 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:25:14 compute-0 nova_compute[117413]: 2025-10-08 16:25:14.737 2 DEBUG nova.network.neutron [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:25:15 compute-0 nova_compute[117413]: 2025-10-08 16:25:15.242 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:16 compute-0 nova_compute[117413]: 2025-10-08 16:25:16.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:16 compute-0 nova_compute[117413]: 2025-10-08 16:25:16.194 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:16 compute-0 nova_compute[117413]: 2025-10-08 16:25:16.482 2 DEBUG nova.network.neutron [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Updating instance_info_cache with network_info: [{"id": "994723e7-3afc-41b5-974c-373e8264e392", "address": "fa:16:3e:24:98:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994723e7-3a", "ovs_interfaceid": "994723e7-3afc-41b5-974c-373e8264e392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:25:16 compute-0 nova_compute[117413]: 2025-10-08 16:25:16.989 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.002 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp68aee_uv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.003 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Creating instance directory: /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.003 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Creating disk.info with the contents: {'/var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk': 'qcow2', '/var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.004 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.004 2 DEBUG nova.objects.instance [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.514 2 DEBUG oslo_utils.imageutils.format_inspector [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.518 2 DEBUG oslo_utils.imageutils.format_inspector [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.519 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.583 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.584 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.585 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.586 2 DEBUG oslo_utils.imageutils.format_inspector [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.589 2 DEBUG oslo_utils.imageutils.format_inspector [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.590 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.648 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.649 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.690 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.691 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.692 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.743 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.744 2 DEBUG nova.virt.disk.api [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.745 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.803 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.804 2 DEBUG nova.virt.disk.api [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:25:17 compute-0 nova_compute[117413]: 2025-10-08 16:25:17.804 2 DEBUG nova.objects.instance [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.311 2 DEBUG nova.objects.base [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.312 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.352 2 DEBUG oslo_concurrency.processutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae/disk.config 497664" returned: 0 in 0.040s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.353 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.356 2 DEBUG nova.virt.libvirt.vif [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:24:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-94197034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-94197034',id=12,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:24:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-09cggcc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:24:26Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "994723e7-3afc-41b5-974c-373e8264e392", "address": "fa:16:3e:24:98:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap994723e7-3a", "ovs_interfaceid": "994723e7-3afc-41b5-974c-373e8264e392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.357 2 DEBUG nova.network.os_vif_util [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "994723e7-3afc-41b5-974c-373e8264e392", "address": "fa:16:3e:24:98:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap994723e7-3a", "ovs_interfaceid": "994723e7-3afc-41b5-974c-373e8264e392", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.358 2 DEBUG nova.network.os_vif_util [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:98:32,bridge_name='br-int',has_traffic_filtering=True,id=994723e7-3afc-41b5-974c-373e8264e392,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994723e7-3a') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.359 2 DEBUG os_vif [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:98:32,bridge_name='br-int',has_traffic_filtering=True,id=994723e7-3afc-41b5-974c-373e8264e392,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994723e7-3a') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.361 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.364 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'd280750b-d363-56e5-8ccf-2ecb45bb1bd2', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap994723e7-3a, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap994723e7-3a, col_values=(('qos', UUID('bb40e1ca-e887-429c-8ee4-34ac2a09254d')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap994723e7-3a, col_values=(('external_ids', {'iface-id': '994723e7-3afc-41b5-974c-373e8264e392', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:98:32', 'vm-uuid': '4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:18 compute-0 NetworkManager[1034]: <info>  [1759940718.3774] manager: (tap994723e7-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.386 2 INFO os_vif [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:98:32,bridge_name='br-int',has_traffic_filtering=True,id=994723e7-3afc-41b5-974c-373e8264e392,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994723e7-3a')
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.387 2 DEBUG nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.388 2 DEBUG nova.compute.manager [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp68aee_uv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.389 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:18 compute-0 podman[145754]: 2025-10-08 16:25:18.472054295 +0000 UTC m=+0.069498116 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:25:18 compute-0 nova_compute[117413]: 2025-10-08 16:25:18.918 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:20 compute-0 nova_compute[117413]: 2025-10-08 16:25:20.066 2 DEBUG nova.network.neutron [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Port 994723e7-3afc-41b5-974c-373e8264e392 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:25:20 compute-0 nova_compute[117413]: 2025-10-08 16:25:20.118 2 DEBUG nova.compute.manager [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp68aee_uv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:25:21 compute-0 nova_compute[117413]: 2025-10-08 16:25:21.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:22 compute-0 ovn_controller[19768]: 2025-10-08T16:25:22Z|00103|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 08 16:25:22 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 16:25:22 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 16:25:23 compute-0 kernel: tap994723e7-3a: entered promiscuous mode
Oct 08 16:25:23 compute-0 NetworkManager[1034]: <info>  [1759940723.0416] manager: (tap994723e7-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Oct 08 16:25:23 compute-0 nova_compute[117413]: 2025-10-08 16:25:23.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:23 compute-0 ovn_controller[19768]: 2025-10-08T16:25:23Z|00104|binding|INFO|Claiming lport 994723e7-3afc-41b5-974c-373e8264e392 for this additional chassis.
Oct 08 16:25:23 compute-0 ovn_controller[19768]: 2025-10-08T16:25:23Z|00105|binding|INFO|994723e7-3afc-41b5-974c-373e8264e392: Claiming fa:16:3e:24:98:32 10.100.0.4
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.053 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:98:32 10.100.0.4'], port_security=['fa:16:3e:24:98:32 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=994723e7-3afc-41b5-974c-373e8264e392) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.054 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 994723e7-3afc-41b5-974c-373e8264e392 in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a unbound from our chassis
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.055 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:25:23 compute-0 ovn_controller[19768]: 2025-10-08T16:25:23Z|00106|binding|INFO|Setting lport 994723e7-3afc-41b5-974c-373e8264e392 ovn-installed in OVS
Oct 08 16:25:23 compute-0 nova_compute[117413]: 2025-10-08 16:25:23.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:23 compute-0 nova_compute[117413]: 2025-10-08 16:25:23.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.074 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0903c0-e4d9-4859-9ef4-20a37afc2ebe]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 systemd-udevd[145823]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:25:23 compute-0 systemd-machined[77548]: New machine qemu-9-instance-0000000c.
Oct 08 16:25:23 compute-0 NetworkManager[1034]: <info>  [1759940723.0959] device (tap994723e7-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:25:23 compute-0 NetworkManager[1034]: <info>  [1759940723.0968] device (tap994723e7-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.110 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[b2049a53-741e-4127-8b81-2c5112461ae7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-0000000c.
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.113 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[6936ffe5-0381-4dfc-aae5-ea903fa4d1ee]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 podman[145802]: 2025-10-08 16:25:23.125148386 +0000 UTC m=+0.088785000 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.149 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[ef92f44d-7937-483e-8c29-cd2132c90145]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.170 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d1203835-3e1d-4a03-b4f3-738b43a7d366]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 198094, 'reachable_time': 21476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 145841, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.190 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8351fc0e-47a0-44df-a386-2c54b641d7ee]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 198106, 'tstamp': 198106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145844, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 198109, 'tstamp': 198109}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145844, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.192 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:23 compute-0 nova_compute[117413]: 2025-10-08 16:25:23.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:23 compute-0 nova_compute[117413]: 2025-10-08 16:25:23.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.196 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ad396c-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.196 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.196 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap56ad396c-40, col_values=(('external_ids', {'iface-id': 'c11878dc-b81c-4cd4-8280-26645e84c0d9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.197 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:25:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:23.198 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[79752392-d8c2-4127-8df2-2cfcff98295f]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-56ad396c-4245-4eb9-9237-69e9ea6a760a\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 56ad396c-4245-4eb9-9237-69e9ea6a760a\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:23 compute-0 nova_compute[117413]: 2025-10-08 16:25:23.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:25 compute-0 ovn_controller[19768]: 2025-10-08T16:25:25Z|00107|binding|INFO|Claiming lport 994723e7-3afc-41b5-974c-373e8264e392 for this chassis.
Oct 08 16:25:25 compute-0 ovn_controller[19768]: 2025-10-08T16:25:25Z|00108|binding|INFO|994723e7-3afc-41b5-974c-373e8264e392: Claiming fa:16:3e:24:98:32 10.100.0.4
Oct 08 16:25:25 compute-0 ovn_controller[19768]: 2025-10-08T16:25:25Z|00109|binding|INFO|Setting lport 994723e7-3afc-41b5-974c-373e8264e392 up in Southbound
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.558 2 INFO nova.compute.manager [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Post operation of migration started
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.559 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.642 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.643 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.736 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.736 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:25:26 compute-0 nova_compute[117413]: 2025-10-08 16:25:26.737 2 DEBUG nova.network.neutron [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:25:27 compute-0 nova_compute[117413]: 2025-10-08 16:25:27.243 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:28 compute-0 nova_compute[117413]: 2025-10-08 16:25:28.204 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:28 compute-0 nova_compute[117413]: 2025-10-08 16:25:28.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:28 compute-0 nova_compute[117413]: 2025-10-08 16:25:28.926 2 DEBUG nova.network.neutron [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Updating instance_info_cache with network_info: [{"id": "994723e7-3afc-41b5-974c-373e8264e392", "address": "fa:16:3e:24:98:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994723e7-3a", "ovs_interfaceid": "994723e7-3afc-41b5-974c-373e8264e392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:25:29 compute-0 nova_compute[117413]: 2025-10-08 16:25:29.433 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:25:29 compute-0 podman[145866]: 2025-10-08 16:25:29.465091568 +0000 UTC m=+0.063685939 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:25:29 compute-0 podman[145867]: 2025-10-08 16:25:29.491589409 +0000 UTC m=+0.090605952 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:25:29 compute-0 podman[127881]: time="2025-10-08T16:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:25:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:25:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3493 "" "Go-http-client/1.1"
Oct 08 16:25:29 compute-0 nova_compute[117413]: 2025-10-08 16:25:29.953 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:29 compute-0 nova_compute[117413]: 2025-10-08 16:25:29.954 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:29 compute-0 nova_compute[117413]: 2025-10-08 16:25:29.954 2 DEBUG oslo_concurrency.lockutils [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:29 compute-0 nova_compute[117413]: 2025-10-08 16:25:29.960 2 INFO nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:25:29 compute-0 virtqemud[117740]: Domain id=9 name='instance-0000000c' uuid=4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae is tainted: custom-monitor
Oct 08 16:25:30 compute-0 nova_compute[117413]: 2025-10-08 16:25:30.969 2 INFO nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:25:31 compute-0 nova_compute[117413]: 2025-10-08 16:25:31.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: ERROR   16:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: ERROR   16:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: ERROR   16:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: ERROR   16:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: ERROR   16:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:25:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:25:31 compute-0 nova_compute[117413]: 2025-10-08 16:25:31.977 2 INFO nova.virt.libvirt.driver [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:25:31 compute-0 nova_compute[117413]: 2025-10-08 16:25:31.983 2 DEBUG nova.compute.manager [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:25:32 compute-0 nova_compute[117413]: 2025-10-08 16:25:32.493 2 DEBUG nova.objects.instance [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:25:33 compute-0 nova_compute[117413]: 2025-10-08 16:25:33.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:33 compute-0 nova_compute[117413]: 2025-10-08 16:25:33.512 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:33 compute-0 nova_compute[117413]: 2025-10-08 16:25:33.917 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:33 compute-0 nova_compute[117413]: 2025-10-08 16:25:33.918 2 WARNING neutronclient.v2_0.client [None req-321327b9-d441-4976-9935-7ced393eb6cf ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:36 compute-0 nova_compute[117413]: 2025-10-08 16:25:36.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:37 compute-0 nova_compute[117413]: 2025-10-08 16:25:37.749 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:37 compute-0 nova_compute[117413]: 2025-10-08 16:25:37.750 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:37 compute-0 nova_compute[117413]: 2025-10-08 16:25:37.750 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:37 compute-0 nova_compute[117413]: 2025-10-08 16:25:37.751 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:37 compute-0 nova_compute[117413]: 2025-10-08 16:25:37.751 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:37 compute-0 nova_compute[117413]: 2025-10-08 16:25:37.766 2 INFO nova.compute.manager [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Terminating instance
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.283 2 DEBUG nova.compute.manager [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:25:38 compute-0 kernel: tap021d7c00-83 (unregistering): left promiscuous mode
Oct 08 16:25:38 compute-0 NetworkManager[1034]: <info>  [1759940738.3082] device (tap021d7c00-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:25:38 compute-0 ovn_controller[19768]: 2025-10-08T16:25:38Z|00110|binding|INFO|Releasing lport 021d7c00-83a0-4211-a29f-23f96ad2535c from this chassis (sb_readonly=0)
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 ovn_controller[19768]: 2025-10-08T16:25:38Z|00111|binding|INFO|Setting lport 021d7c00-83a0-4211-a29f-23f96ad2535c down in Southbound
Oct 08 16:25:38 compute-0 ovn_controller[19768]: 2025-10-08T16:25:38Z|00112|binding|INFO|Removing iface tap021d7c00-83 ovn-installed in OVS
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.324 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:e9:5e 10.100.0.5'], port_security=['fa:16:3e:54:e9:5e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '50b0b920-cb3d-445e-8a86-8b36faf27091', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=021d7c00-83a0-4211-a29f-23f96ad2535c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.325 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 021d7c00-83a0-4211-a29f-23f96ad2535c in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a unbound from our chassis
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.326 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.339 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[81291390-b515-42df-a5f8-e4cdcc725954]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.364 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a01b1799-73ea-4aba-9df5-52336fe5e316]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.367 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[0b414707-ddcb-402c-95f6-f03e0ecea105]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 08 16:25:38 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000d.scope: Consumed 14.119s CPU time.
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 systemd-machined[77548]: Machine qemu-8-instance-0000000d terminated.
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.392 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[075f5ddf-5b31-4673-9889-098f3795ccb0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.410 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[38d6dda1-bffc-4cfc-9d68-f89fdf9e2bd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 198094, 'reachable_time': 21476, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 145935, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.425 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8d317d-2ba0-4757-a4d0-146198b9c4e4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 198106, 'tstamp': 198106}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145936, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 198109, 'tstamp': 198109}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 145936, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.426 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.433 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ad396c-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.433 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.433 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap56ad396c-40, col_values=(('external_ids', {'iface-id': 'c11878dc-b81c-4cd4-8280-26645e84c0d9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.433 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.434 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ebf7e7-e439-4efb-804d-9ae50fc7ee6a]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-56ad396c-4245-4eb9-9237-69e9ea6a760a\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 56ad396c-4245-4eb9-9237-69e9ea6a760a\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.543 2 INFO nova.virt.libvirt.driver [-] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Instance destroyed successfully.
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.543 2 DEBUG nova.objects.instance [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lazy-loading 'resources' on Instance uuid 50b0b920-cb3d-445e-8a86-8b36faf27091 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.566 2 DEBUG nova.compute.manager [req-091b6ac8-9b54-4bdd-ad68-a4c0ece044bf req-717d4974-2812-4d09-a6ec-0d3a9e5e8718 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-unplugged-021d7c00-83a0-4211-a29f-23f96ad2535c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.567 2 DEBUG oslo_concurrency.lockutils [req-091b6ac8-9b54-4bdd-ad68-a4c0ece044bf req-717d4974-2812-4d09-a6ec-0d3a9e5e8718 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.567 2 DEBUG oslo_concurrency.lockutils [req-091b6ac8-9b54-4bdd-ad68-a4c0ece044bf req-717d4974-2812-4d09-a6ec-0d3a9e5e8718 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.567 2 DEBUG oslo_concurrency.lockutils [req-091b6ac8-9b54-4bdd-ad68-a4c0ece044bf req-717d4974-2812-4d09-a6ec-0d3a9e5e8718 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.567 2 DEBUG nova.compute.manager [req-091b6ac8-9b54-4bdd-ad68-a4c0ece044bf req-717d4974-2812-4d09-a6ec-0d3a9e5e8718 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] No waiting events found dispatching network-vif-unplugged-021d7c00-83a0-4211-a29f-23f96ad2535c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.568 2 DEBUG nova.compute.manager [req-091b6ac8-9b54-4bdd-ad68-a4c0ece044bf req-717d4974-2812-4d09-a6ec-0d3a9e5e8718 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-unplugged-021d7c00-83a0-4211-a29f-23f96ad2535c for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:25:38 compute-0 podman[145955]: 2025-10-08 16:25:38.61594702 +0000 UTC m=+0.053688412 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.622 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:25:38 compute-0 nova_compute[117413]: 2025-10-08 16:25:38.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.623 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:25:38 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:38.624 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.050 2 DEBUG nova.virt.libvirt.vif [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:24:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-2035630336',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-2035630336',id=13,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:24:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-e58mlcox',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:24:54Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=50b0b920-cb3d-445e-8a86-8b36faf27091,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.051 2 DEBUG nova.network.os_vif_util [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "021d7c00-83a0-4211-a29f-23f96ad2535c", "address": "fa:16:3e:54:e9:5e", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap021d7c00-83", "ovs_interfaceid": "021d7c00-83a0-4211-a29f-23f96ad2535c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.051 2 DEBUG nova.network.os_vif_util [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.051 2 DEBUG os_vif [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.054 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap021d7c00-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=c04e722a-974d-412b-932f-499040c59e7c) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.061 2 INFO os_vif [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:e9:5e,bridge_name='br-int',has_traffic_filtering=True,id=021d7c00-83a0-4211-a29f-23f96ad2535c,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap021d7c00-83')
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.062 2 INFO nova.virt.libvirt.driver [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Deleting instance files /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091_del
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.063 2 INFO nova.virt.libvirt.driver [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Deletion of /var/lib/nova/instances/50b0b920-cb3d-445e-8a86-8b36faf27091_del complete
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.576 2 INFO nova.compute.manager [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.578 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.578 2 DEBUG nova.compute.manager [-] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.578 2 DEBUG nova.network.neutron [-] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.579 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:39 compute-0 nova_compute[117413]: 2025-10-08 16:25:39.916 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.205 2 DEBUG nova.compute.manager [req-abba87aa-4cdc-4612-8b56-d5f9db47aa96 req-93a65042-d4ec-4994-8639-757a799b8d05 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-deleted-021d7c00-83a0-4211-a29f-23f96ad2535c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.205 2 INFO nova.compute.manager [req-abba87aa-4cdc-4612-8b56-d5f9db47aa96 req-93a65042-d4ec-4994-8639-757a799b8d05 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Neutron deleted interface 021d7c00-83a0-4211-a29f-23f96ad2535c; detaching it from the instance and deleting it from the info cache
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.205 2 DEBUG nova.network.neutron [req-abba87aa-4cdc-4612-8b56-d5f9db47aa96 req-93a65042-d4ec-4994-8639-757a799b8d05 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.621 2 DEBUG nova.compute.manager [req-b542518b-1112-497d-ad7f-ded550233d04 req-a92efc1c-8f21-4bbd-964f-58817b34609a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-unplugged-021d7c00-83a0-4211-a29f-23f96ad2535c external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.622 2 DEBUG oslo_concurrency.lockutils [req-b542518b-1112-497d-ad7f-ded550233d04 req-a92efc1c-8f21-4bbd-964f-58817b34609a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.622 2 DEBUG oslo_concurrency.lockutils [req-b542518b-1112-497d-ad7f-ded550233d04 req-a92efc1c-8f21-4bbd-964f-58817b34609a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.622 2 DEBUG oslo_concurrency.lockutils [req-b542518b-1112-497d-ad7f-ded550233d04 req-a92efc1c-8f21-4bbd-964f-58817b34609a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.622 2 DEBUG nova.compute.manager [req-b542518b-1112-497d-ad7f-ded550233d04 req-a92efc1c-8f21-4bbd-964f-58817b34609a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] No waiting events found dispatching network-vif-unplugged-021d7c00-83a0-4211-a29f-23f96ad2535c pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.623 2 DEBUG nova.compute.manager [req-b542518b-1112-497d-ad7f-ded550233d04 req-a92efc1c-8f21-4bbd-964f-58817b34609a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Received event network-vif-unplugged-021d7c00-83a0-4211-a29f-23f96ad2535c for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.667 2 DEBUG nova.network.neutron [-] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:25:40 compute-0 nova_compute[117413]: 2025-10-08 16:25:40.713 2 DEBUG nova.compute.manager [req-abba87aa-4cdc-4612-8b56-d5f9db47aa96 req-93a65042-d4ec-4994-8639-757a799b8d05 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Detach interface failed, port_id=021d7c00-83a0-4211-a29f-23f96ad2535c, reason: Instance 50b0b920-cb3d-445e-8a86-8b36faf27091 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:25:41 compute-0 nova_compute[117413]: 2025-10-08 16:25:41.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:41 compute-0 nova_compute[117413]: 2025-10-08 16:25:41.173 2 INFO nova.compute.manager [-] [instance: 50b0b920-cb3d-445e-8a86-8b36faf27091] Took 1.59 seconds to deallocate network for instance.
Oct 08 16:25:41 compute-0 nova_compute[117413]: 2025-10-08 16:25:41.704 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:41 compute-0 nova_compute[117413]: 2025-10-08 16:25:41.705 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:41 compute-0 nova_compute[117413]: 2025-10-08 16:25:41.757 2 DEBUG nova.compute.provider_tree [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:25:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:41.902 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:41.902 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:41.902 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:42 compute-0 nova_compute[117413]: 2025-10-08 16:25:42.264 2 DEBUG nova.scheduler.client.report [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:25:42 compute-0 nova_compute[117413]: 2025-10-08 16:25:42.778 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.073s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:42 compute-0 nova_compute[117413]: 2025-10-08 16:25:42.805 2 INFO nova.scheduler.client.report [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Deleted allocations for instance 50b0b920-cb3d-445e-8a86-8b36faf27091
Oct 08 16:25:43 compute-0 podman[145978]: 2025-10-08 16:25:43.490858448 +0000 UTC m=+0.082845280 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git)
Oct 08 16:25:43 compute-0 nova_compute[117413]: 2025-10-08 16:25:43.832 2 DEBUG oslo_concurrency.lockutils [None req-1c8acbf4-63f7-4def-8b42-71d31a3e82c6 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "50b0b920-cb3d-445e-8a86-8b36faf27091" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.082s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.677 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.679 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.680 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.681 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.681 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:44 compute-0 nova_compute[117413]: 2025-10-08 16:25:44.694 2 INFO nova.compute.manager [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Terminating instance
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.208 2 DEBUG nova.compute.manager [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:25:45 compute-0 kernel: tap994723e7-3a (unregistering): left promiscuous mode
Oct 08 16:25:45 compute-0 NetworkManager[1034]: <info>  [1759940745.2335] device (tap994723e7-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:25:45 compute-0 ovn_controller[19768]: 2025-10-08T16:25:45Z|00113|binding|INFO|Releasing lport 994723e7-3afc-41b5-974c-373e8264e392 from this chassis (sb_readonly=0)
Oct 08 16:25:45 compute-0 ovn_controller[19768]: 2025-10-08T16:25:45Z|00114|binding|INFO|Setting lport 994723e7-3afc-41b5-974c-373e8264e392 down in Southbound
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 ovn_controller[19768]: 2025-10-08T16:25:45Z|00115|binding|INFO|Removing iface tap994723e7-3a ovn-installed in OVS
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.252 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:98:32 10.100.0.4'], port_security=['fa:16:3e:24:98:32 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '15', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=994723e7-3afc-41b5-974c-373e8264e392) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.253 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 994723e7-3afc-41b5-974c-373e8264e392 in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a unbound from our chassis
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.256 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 56ad396c-4245-4eb9-9237-69e9ea6a760a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.258 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[050cf7e0-bdd8-499e-81fa-dc940070531e]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.259 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a namespace which is not needed anymore
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 08 16:25:45 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d0000000c.scope: Consumed 2.254s CPU time.
Oct 08 16:25:45 compute-0 systemd-machined[77548]: Machine qemu-9-instance-0000000c terminated.
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.391 2 DEBUG nova.compute.manager [req-8a8d34de-1562-4492-b7af-4ae70c95d05a req-cb07b4c9-0e2f-4dfa-bc95-aab5dd1f6f52 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Received event network-vif-unplugged-994723e7-3afc-41b5-974c-373e8264e392 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.391 2 DEBUG oslo_concurrency.lockutils [req-8a8d34de-1562-4492-b7af-4ae70c95d05a req-cb07b4c9-0e2f-4dfa-bc95-aab5dd1f6f52 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.392 2 DEBUG oslo_concurrency.lockutils [req-8a8d34de-1562-4492-b7af-4ae70c95d05a req-cb07b4c9-0e2f-4dfa-bc95-aab5dd1f6f52 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.392 2 DEBUG oslo_concurrency.lockutils [req-8a8d34de-1562-4492-b7af-4ae70c95d05a req-cb07b4c9-0e2f-4dfa-bc95-aab5dd1f6f52 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.392 2 DEBUG nova.compute.manager [req-8a8d34de-1562-4492-b7af-4ae70c95d05a req-cb07b4c9-0e2f-4dfa-bc95-aab5dd1f6f52 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] No waiting events found dispatching network-vif-unplugged-994723e7-3afc-41b5-974c-373e8264e392 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.392 2 DEBUG nova.compute.manager [req-8a8d34de-1562-4492-b7af-4ae70c95d05a req-cb07b4c9-0e2f-4dfa-bc95-aab5dd1f6f52 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Received event network-vif-unplugged-994723e7-3afc-41b5-974c-373e8264e392 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:25:45 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [NOTICE]   (145603) : haproxy version is 3.0.5-8e879a5
Oct 08 16:25:45 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [NOTICE]   (145603) : path to executable is /usr/sbin/haproxy
Oct 08 16:25:45 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [WARNING]  (145603) : Exiting Master process...
Oct 08 16:25:45 compute-0 podman[146025]: 2025-10-08 16:25:45.415880528 +0000 UTC m=+0.039213176 container kill 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4)
Oct 08 16:25:45 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [ALERT]    (145603) : Current worker (145605) exited with code 143 (Terminated)
Oct 08 16:25:45 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[145583]: [WARNING]  (145603) : All workers exited. Exiting... (0)
Oct 08 16:25:45 compute-0 systemd[1]: libpod-56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233.scope: Deactivated successfully.
Oct 08 16:25:45 compute-0 podman[146042]: 2025-10-08 16:25:45.465200854 +0000 UTC m=+0.025955986 container died 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.472 2 INFO nova.virt.libvirt.driver [-] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Instance destroyed successfully.
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.473 2 DEBUG nova.objects.instance [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lazy-loading 'resources' on Instance uuid 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233-userdata-shm.mount: Deactivated successfully.
Oct 08 16:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-65ec1d34df830d9ae3f30941a6af6c871464e2b7443d307a1baf05e872cc6753-merged.mount: Deactivated successfully.
Oct 08 16:25:45 compute-0 podman[146042]: 2025-10-08 16:25:45.517359331 +0000 UTC m=+0.078114423 container cleanup 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007)
Oct 08 16:25:45 compute-0 systemd[1]: libpod-conmon-56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233.scope: Deactivated successfully.
Oct 08 16:25:45 compute-0 podman[146045]: 2025-10-08 16:25:45.534088051 +0000 UTC m=+0.085391582 container remove 56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.540 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[055cf266-5cc4-4f27-be0b-7b66890a8214]: (4, ("Wed Oct  8 04:25:45 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a (56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233)\n56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233\nWed Oct  8 04:25:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a (56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233)\n56e1f8c867891ca9e4a8b9b029299d0347d627b35719061f7c831f533bd91233\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.542 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a39963a1-cf92-4bc5-9e0d-8840cf463f12]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.542 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.543 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[55e9f14c-4229-4fc8-be0f-bc098442cce0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.543 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:45 compute-0 kernel: tap56ad396c-40: left promiscuous mode
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.563 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a45f18-601a-463e-9b17-ec60f9522cd3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.597 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8e70a8-8776-45db-801e-b4d83cd26adb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.598 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[44048780-1256-4cf6-a3a1-73f1670fc45d]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.617 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[79bc16fa-31b6-4d96-9da7-5ea863ab729a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 198086, 'reachable_time': 23415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146095, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.619 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:25:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:25:45.619 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8f1567-9147-4df3-a624-2642ee310d1d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:25:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d56ad396c\x2d4245\x2d4eb9\x2d9237\x2d69e9ea6a760a.mount: Deactivated successfully.
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.979 2 DEBUG nova.virt.libvirt.vif [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:24:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-94197034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-94197034',id=12,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:24:26Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-09cggcc0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:25:33Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "994723e7-3afc-41b5-974c-373e8264e392", "address": "fa:16:3e:24:98:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994723e7-3a", "ovs_interfaceid": "994723e7-3afc-41b5-974c-373e8264e392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.980 2 DEBUG nova.network.os_vif_util [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "994723e7-3afc-41b5-974c-373e8264e392", "address": "fa:16:3e:24:98:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap994723e7-3a", "ovs_interfaceid": "994723e7-3afc-41b5-974c-373e8264e392", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.980 2 DEBUG nova.network.os_vif_util [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:98:32,bridge_name='br-int',has_traffic_filtering=True,id=994723e7-3afc-41b5-974c-373e8264e392,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994723e7-3a') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.981 2 DEBUG os_vif [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:98:32,bridge_name='br-int',has_traffic_filtering=True,id=994723e7-3afc-41b5-974c-373e8264e392,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994723e7-3a') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap994723e7-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.985 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=bb40e1ca-e887-429c-8ee4-34ac2a09254d) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.990 2 INFO os_vif [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:98:32,bridge_name='br-int',has_traffic_filtering=True,id=994723e7-3afc-41b5-974c-373e8264e392,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap994723e7-3a')
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.990 2 INFO nova.virt.libvirt.driver [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Deleting instance files /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae_del
Oct 08 16:25:45 compute-0 nova_compute[117413]: 2025-10-08 16:25:45.991 2 INFO nova.virt.libvirt.driver [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Deletion of /var/lib/nova/instances/4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae_del complete
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.503 2 INFO nova.compute.manager [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Took 1.29 seconds to destroy the instance on the hypervisor.
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.504 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.505 2 DEBUG nova.compute.manager [-] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.505 2 DEBUG nova.network.neutron [-] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.505 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:46 compute-0 nova_compute[117413]: 2025-10-08 16:25:46.685 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.426 2 DEBUG nova.network.neutron [-] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.542 2 DEBUG nova.compute.manager [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Received event network-vif-unplugged-994723e7-3afc-41b5-974c-373e8264e392 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.542 2 DEBUG oslo_concurrency.lockutils [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.543 2 DEBUG oslo_concurrency.lockutils [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.543 2 DEBUG oslo_concurrency.lockutils [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.544 2 DEBUG nova.compute.manager [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] No waiting events found dispatching network-vif-unplugged-994723e7-3afc-41b5-974c-373e8264e392 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.544 2 DEBUG nova.compute.manager [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Received event network-vif-unplugged-994723e7-3afc-41b5-974c-373e8264e392 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.545 2 DEBUG nova.compute.manager [req-8fa28b79-5c6c-4431-be64-34605093918a req-f7e520cb-d506-4565-b767-b643b79c5c0a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Received event network-vif-deleted-994723e7-3afc-41b5-974c-373e8264e392 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:25:47 compute-0 nova_compute[117413]: 2025-10-08 16:25:47.938 2 INFO nova.compute.manager [-] [instance: 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae] Took 1.43 seconds to deallocate network for instance.
Oct 08 16:25:48 compute-0 nova_compute[117413]: 2025-10-08 16:25:48.463 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:25:48 compute-0 nova_compute[117413]: 2025-10-08 16:25:48.463 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:25:48 compute-0 nova_compute[117413]: 2025-10-08 16:25:48.469 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:48 compute-0 nova_compute[117413]: 2025-10-08 16:25:48.503 2 INFO nova.scheduler.client.report [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Deleted allocations for instance 4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae
Oct 08 16:25:49 compute-0 podman[146096]: 2025-10-08 16:25:49.45343121 +0000 UTC m=+0.058958584 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 08 16:25:49 compute-0 nova_compute[117413]: 2025-10-08 16:25:49.530 2 DEBUG oslo_concurrency.lockutils [None req-f8ff3248-9dd1-4b03-8151-8b3b9040e526 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "4aa9e9ce-3631-47d2-92c6-b75c9a6c69ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.850s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:25:50 compute-0 nova_compute[117413]: 2025-10-08 16:25:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:51 compute-0 nova_compute[117413]: 2025-10-08 16:25:51.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:53 compute-0 podman[146116]: 2025-10-08 16:25:53.450966642 +0000 UTC m=+0.055996719 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, 
managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:25:55 compute-0 nova_compute[117413]: 2025-10-08 16:25:55.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:56 compute-0 nova_compute[117413]: 2025-10-08 16:25:56.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:25:59 compute-0 podman[127881]: time="2025-10-08T16:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:25:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:25:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3027 "" "Go-http-client/1.1"
Oct 08 16:26:00 compute-0 nova_compute[117413]: 2025-10-08 16:26:00.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:00 compute-0 podman[146135]: 2025-10-08 16:26:00.531973888 +0000 UTC m=+0.117358539 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:26:00 compute-0 podman[146136]: 2025-10-08 16:26:00.547135083 +0000 UTC m=+0.125971916 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:26:00 compute-0 nova_compute[117413]: 2025-10-08 16:26:00.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: ERROR   16:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: ERROR   16:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: ERROR   16:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: ERROR   16:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: ERROR   16:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:26:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:01 compute-0 nova_compute[117413]: 2025-10-08 16:26:01.877 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:26:02 compute-0 nova_compute[117413]: 2025-10-08 16:26:02.024 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:26:02 compute-0 nova_compute[117413]: 2025-10-08 16:26:02.025 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:02 compute-0 nova_compute[117413]: 2025-10-08 16:26:02.042 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.017s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:02 compute-0 nova_compute[117413]: 2025-10-08 16:26:02.042 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6183MB free_disk=73.25475311279297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:26:02 compute-0 nova_compute[117413]: 2025-10-08 16:26:02.043 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:02 compute-0 nova_compute[117413]: 2025-10-08 16:26:02.043 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:03 compute-0 nova_compute[117413]: 2025-10-08 16:26:03.081 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:26:03 compute-0 nova_compute[117413]: 2025-10-08 16:26:03.082 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:26:02 up 34 min,  0 user,  load average: 0.11, 0.18, 0.24\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:26:03 compute-0 nova_compute[117413]: 2025-10-08 16:26:03.097 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:26:03 compute-0 nova_compute[117413]: 2025-10-08 16:26:03.604 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:26:04 compute-0 nova_compute[117413]: 2025-10-08 16:26:04.133 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:26:04 compute-0 nova_compute[117413]: 2025-10-08 16:26:04.135 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.092s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:05 compute-0 nova_compute[117413]: 2025-10-08 16:26:05.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:06 compute-0 nova_compute[117413]: 2025-10-08 16:26:06.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:06 compute-0 nova_compute[117413]: 2025-10-08 16:26:06.135 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:06 compute-0 nova_compute[117413]: 2025-10-08 16:26:06.136 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:06 compute-0 nova_compute[117413]: 2025-10-08 16:26:06.136 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:06 compute-0 nova_compute[117413]: 2025-10-08 16:26:06.136 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:06 compute-0 nova_compute[117413]: 2025-10-08 16:26:06.136 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:26:08 compute-0 nova_compute[117413]: 2025-10-08 16:26:08.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:09 compute-0 podman[146188]: 2025-10-08 16:26:09.443702667 +0000 UTC m=+0.051652343 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:26:10 compute-0 nova_compute[117413]: 2025-10-08 16:26:10.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:26:10 compute-0 nova_compute[117413]: 2025-10-08 16:26:10.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:11 compute-0 nova_compute[117413]: 2025-10-08 16:26:11.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:14 compute-0 podman[146208]: 2025-10-08 16:26:14.44659727 +0000 UTC m=+0.057152602 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible)
Oct 08 16:26:15 compute-0 nova_compute[117413]: 2025-10-08 16:26:15.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:16 compute-0 nova_compute[117413]: 2025-10-08 16:26:16.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:17 compute-0 nova_compute[117413]: 2025-10-08 16:26:17.959 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:17 compute-0 nova_compute[117413]: 2025-10-08 16:26:17.959 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:18 compute-0 nova_compute[117413]: 2025-10-08 16:26:18.467 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:26:19 compute-0 nova_compute[117413]: 2025-10-08 16:26:19.038 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:19 compute-0 nova_compute[117413]: 2025-10-08 16:26:19.039 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:19 compute-0 nova_compute[117413]: 2025-10-08 16:26:19.047 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:26:19 compute-0 nova_compute[117413]: 2025-10-08 16:26:19.047 2 INFO nova.compute.claims [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:26:20 compute-0 nova_compute[117413]: 2025-10-08 16:26:20.097 2 DEBUG nova.compute.provider_tree [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:26:20 compute-0 podman[146231]: 2025-10-08 16:26:20.450664171 +0000 UTC m=+0.057008687 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:26:20 compute-0 nova_compute[117413]: 2025-10-08 16:26:20.604 2 DEBUG nova.scheduler.client.report [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:20.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.113 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.074s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.113 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.627 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.628 2 DEBUG nova.network.neutron [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.628 2 WARNING neutronclient.v2_0.client [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:21 compute-0 nova_compute[117413]: 2025-10-08 16:26:21.629 2 WARNING neutronclient.v2_0.client [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:22 compute-0 nova_compute[117413]: 2025-10-08 16:26:22.137 2 INFO nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:26:22 compute-0 nova_compute[117413]: 2025-10-08 16:26:22.407 2 DEBUG nova.network.neutron [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Successfully created port: 5e9886a9-99b8-42cb-9de2-4102298b8e9e _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:26:22 compute-0 nova_compute[117413]: 2025-10-08 16:26:22.652 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.433 2 DEBUG nova.network.neutron [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Successfully updated port: 5e9886a9-99b8-42cb-9de2-4102298b8e9e _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.485 2 DEBUG nova.compute.manager [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-changed-5e9886a9-99b8-42cb-9de2-4102298b8e9e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.486 2 DEBUG nova.compute.manager [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Refreshing instance network info cache due to event network-changed-5e9886a9-99b8-42cb-9de2-4102298b8e9e. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.486 2 DEBUG oslo_concurrency.lockutils [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-7bf9717b-884a-4c47-a0d2-3d00ce297727" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.486 2 DEBUG oslo_concurrency.lockutils [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-7bf9717b-884a-4c47-a0d2-3d00ce297727" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.486 2 DEBUG nova.network.neutron [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Refreshing network info cache for port 5e9886a9-99b8-42cb-9de2-4102298b8e9e _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.676 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.678 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.678 2 INFO nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Creating image(s)
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.678 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "/var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.679 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "/var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.679 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "/var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.680 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.682 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.684 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.735 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.736 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.737 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.737 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.740 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.740 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.802 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.803 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.940 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "refresh_cache-7bf9717b-884a-4c47-a0d2-3d00ce297727" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:26:23 compute-0 nova_compute[117413]: 2025-10-08 16:26:23.991 2 WARNING neutronclient.v2_0.client [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.070 2 DEBUG nova.network.neutron [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.170 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk 1073741824" returned: 0 in 0.368s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.171 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.434s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.172 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.208 2 DEBUG nova.network.neutron [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.232 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.233 2 DEBUG nova.virt.disk.api [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Checking if we can resize image /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.234 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.292 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.293 2 DEBUG nova.virt.disk.api [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Cannot resize image /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.294 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.294 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Ensure instance console log exists: /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.295 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.295 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.295 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:24 compute-0 podman[146267]: 2025-10-08 16:26:24.441948215 +0000 UTC m=+0.052514369 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.715 2 DEBUG oslo_concurrency.lockutils [req-9c80f7bb-e4d9-4ceb-b17f-5fa778d64fae req-9347e928-4ccc-41c3-bfeb-e99aee72a09a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-7bf9717b-884a-4c47-a0d2-3d00ce297727" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.716 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquired lock "refresh_cache-7bf9717b-884a-4c47-a0d2-3d00ce297727" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:26:24 compute-0 nova_compute[117413]: 2025-10-08 16:26:24.716 2 DEBUG nova.network.neutron [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:26:25 compute-0 nova_compute[117413]: 2025-10-08 16:26:25.944 2 DEBUG nova.network.neutron [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:26:26 compute-0 nova_compute[117413]: 2025-10-08 16:26:26.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:26 compute-0 nova_compute[117413]: 2025-10-08 16:26:26.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:26 compute-0 nova_compute[117413]: 2025-10-08 16:26:26.152 2 WARNING neutronclient.v2_0.client [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:26 compute-0 nova_compute[117413]: 2025-10-08 16:26:26.499 2 DEBUG nova.network.neutron [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Updating instance_info_cache with network_info: [{"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.006 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Releasing lock "refresh_cache-7bf9717b-884a-4c47-a0d2-3d00ce297727" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.007 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Instance network_info: |[{"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.009 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Start _get_guest_xml network_info=[{"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.014 2 WARNING nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.016 2 DEBUG nova.virt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteHostMaintenanceStrategy-server-1858845142', uuid='7bf9717b-884a-4c47-a0d2-3d00ce297727'), owner=OwnerMeta(userid='93b0b144b7494967bce532f29a6a5c53', username='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin', projectid='1820638f7dc1498db1dd11607c4370f2', projectname='tempest-TestExecuteHostMaintenanceStrategy-1649105137'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": 
"5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940787.0161152) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.021 2 DEBUG nova.virt.libvirt.host [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.021 2 DEBUG nova.virt.libvirt.host [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.025 2 DEBUG nova.virt.libvirt.host [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.026 2 DEBUG nova.virt.libvirt.host [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.027 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.028 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.028 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.029 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.029 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.030 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.030 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.031 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.031 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.032 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.032 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.033 2 DEBUG nova.virt.hardware [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.037 2 DEBUG nova.virt.libvirt.vif [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:26:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1858845142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1858845142',id=15,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-lp60osjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestEx
ecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:26:22Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=7bf9717b-884a-4c47-a0d2-3d00ce297727,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.038 2 DEBUG nova.network.os_vif_util [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.038 2 DEBUG nova.network.os_vif_util [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.039 2 DEBUG nova.objects.instance [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7bf9717b-884a-4c47-a0d2-3d00ce297727 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.546 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <uuid>7bf9717b-884a-4c47-a0d2-3d00ce297727</uuid>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <name>instance-0000000f</name>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-1858845142</nova:name>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:26:27</nova:creationTime>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:26:27 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:26:27 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:user uuid="93b0b144b7494967bce532f29a6a5c53">tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin</nova:user>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:project uuid="1820638f7dc1498db1dd11607c4370f2">tempest-TestExecuteHostMaintenanceStrategy-1649105137</nova:project>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         <nova:port uuid="5e9886a9-99b8-42cb-9de2-4102298b8e9e">
Oct 08 16:26:27 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <system>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <entry name="serial">7bf9717b-884a-4c47-a0d2-3d00ce297727</entry>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <entry name="uuid">7bf9717b-884a-4c47-a0d2-3d00ce297727</entry>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </system>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <os>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </os>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <features>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </features>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.config"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:27:a3:d8"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <target dev="tap5e9886a9-99"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/console.log" append="off"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <video>
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </video>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:26:27 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:26:27 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:26:27 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:26:27 compute-0 nova_compute[117413]: </domain>
Oct 08 16:26:27 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.548 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Preparing to wait for external event network-vif-plugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.548 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.548 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.549 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.550 2 DEBUG nova.virt.libvirt.vif [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:26:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1858845142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1858845142',id=15,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-lp60osjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:26:22Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=7bf9717b-884a-4c47-a0d2-3d00ce297727,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.550 2 DEBUG nova.network.os_vif_util [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.550 2 DEBUG nova.network.os_vif_util [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.551 2 DEBUG os_vif [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.551 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.552 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '9dfda060-2ac3-5ea1-8a2f-f9ce3b4a7c7a', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.596 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e9886a9-99, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap5e9886a9-99, col_values=(('qos', UUID('cefc902c-0464-4c64-b59a-e82ec17d9907')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap5e9886a9-99, col_values=(('external_ids', {'iface-id': '5e9886a9-99b8-42cb-9de2-4102298b8e9e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:a3:d8', 'vm-uuid': '7bf9717b-884a-4c47-a0d2-3d00ce297727'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:27 compute-0 NetworkManager[1034]: <info>  [1759940787.5992] manager: (tap5e9886a9-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:27 compute-0 nova_compute[117413]: 2025-10-08 16:26:27.606 2 INFO os_vif [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99')
Oct 08 16:26:29 compute-0 nova_compute[117413]: 2025-10-08 16:26:29.146 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:26:29 compute-0 nova_compute[117413]: 2025-10-08 16:26:29.147 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:26:29 compute-0 nova_compute[117413]: 2025-10-08 16:26:29.147 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] No VIF found with MAC fa:16:3e:27:a3:d8, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:26:29 compute-0 nova_compute[117413]: 2025-10-08 16:26:29.147 2 INFO nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Using config drive
Oct 08 16:26:29 compute-0 nova_compute[117413]: 2025-10-08 16:26:29.657 2 WARNING neutronclient.v2_0.client [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:29 compute-0 podman[127881]: time="2025-10-08T16:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:26:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:26:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3030 "" "Go-http-client/1.1"
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.002 2 INFO nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Creating config drive at /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.config
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.009 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmp0cpblfq6 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.158 2 DEBUG oslo_concurrency.processutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmp0cpblfq6" returned: 0 in 0.149s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:30 compute-0 kernel: tap5e9886a9-99: entered promiscuous mode
Oct 08 16:26:30 compute-0 ovn_controller[19768]: 2025-10-08T16:26:30Z|00116|binding|INFO|Claiming lport 5e9886a9-99b8-42cb-9de2-4102298b8e9e for this chassis.
Oct 08 16:26:30 compute-0 ovn_controller[19768]: 2025-10-08T16:26:30Z|00117|binding|INFO|5e9886a9-99b8-42cb-9de2-4102298b8e9e: Claiming fa:16:3e:27:a3:d8 10.100.0.4
Oct 08 16:26:30 compute-0 NetworkManager[1034]: <info>  [1759940790.2364] manager: (tap5e9886a9-99): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 ovn_controller[19768]: 2025-10-08T16:26:30Z|00118|binding|INFO|Setting lport 5e9886a9-99b8-42cb-9de2-4102298b8e9e ovn-installed in OVS
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.252 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:a3:d8 10.100.0.4'], port_security=['fa:16:3e:27:a3:d8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7bf9717b-884a-4c47-a0d2-3d00ce297727', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=5e9886a9-99b8-42cb-9de2-4102298b8e9e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:26:30 compute-0 ovn_controller[19768]: 2025-10-08T16:26:30Z|00119|binding|INFO|Setting lport 5e9886a9-99b8-42cb-9de2-4102298b8e9e up in Southbound
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.253 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 5e9886a9-99b8-42cb-9de2-4102298b8e9e in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a bound to our chassis
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.255 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:26:30 compute-0 systemd-machined[77548]: New machine qemu-10-instance-0000000f.
Oct 08 16:26:30 compute-0 systemd-udevd[146310]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.275 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb92338-f98b-4e51-ba6a-f62106d5cb7b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.276 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap56ad396c-41 in ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.278 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap56ad396c-40 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.278 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[051c69e9-b6a1-4496-bc80-a5cea1a38f07]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.280 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a7d07a-b751-4ac8-8c50-726ef162d261]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000f.
Oct 08 16:26:30 compute-0 NetworkManager[1034]: <info>  [1759940790.2947] device (tap5e9886a9-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:26:30 compute-0 NetworkManager[1034]: <info>  [1759940790.2961] device (tap5e9886a9-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.296 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[169a5df7-938e-4017-b33e-2de67a1d0899]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.315 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[39cab8a5-9589-462d-a7c8-cc9b69531382]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.349 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4601d8-afa8-48aa-b52b-f7124f166552]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.353 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5280303f-3602-46e6-8639-22e645e6f285]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 NetworkManager[1034]: <info>  [1759940790.3550] manager: (tap56ad396c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.391 2 DEBUG nova.compute.manager [req-bd8a5935-b467-4d6d-ab48-a68ead1623df req-eb40f3e3-40ad-4607-8a77-3253526dc060 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-plugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.392 2 DEBUG oslo_concurrency.lockutils [req-bd8a5935-b467-4d6d-ab48-a68ead1623df req-eb40f3e3-40ad-4607-8a77-3253526dc060 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.392 2 DEBUG oslo_concurrency.lockutils [req-bd8a5935-b467-4d6d-ab48-a68ead1623df req-eb40f3e3-40ad-4607-8a77-3253526dc060 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.392 2 DEBUG oslo_concurrency.lockutils [req-bd8a5935-b467-4d6d-ab48-a68ead1623df req-eb40f3e3-40ad-4607-8a77-3253526dc060 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.392 2 DEBUG nova.compute.manager [req-bd8a5935-b467-4d6d-ab48-a68ead1623df req-eb40f3e3-40ad-4607-8a77-3253526dc060 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Processing event network-vif-plugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.393 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f282fa-cc1c-4216-af26-22468caef78e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.396 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8093b8-4e49-42dc-bf32-5c341740c942]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 NetworkManager[1034]: <info>  [1759940790.4252] device (tap56ad396c-40): carrier: link connected
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.433 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[fea01b01-2122-4343-8e88-e80fc71705bd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.464 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8a715745-f6cd-4cc1-9aba-63efafde8435]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 207911, 'reachable_time': 38651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146342, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.487 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b7942591-f48a-48e6-97a4-e7b92cb4a68a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:36b7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 207911, 'tstamp': 207911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 146343, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.522 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9c632f53-99c1-463e-8f54-db617e28363c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 207911, 'reachable_time': 38651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 146344, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.569 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad1b25c-9aeb-49b0-9e88-4a960e76ca04]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.653 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9804e14e-4526-4f40-b8e3-9fa6a6e0952f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.654 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.655 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.655 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ad396c-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 kernel: tap56ad396c-40: entered promiscuous mode
Oct 08 16:26:30 compute-0 NetworkManager[1034]: <info>  [1759940790.6599] manager: (tap56ad396c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.664 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap56ad396c-40, col_values=(('external_ids', {'iface-id': 'c11878dc-b81c-4cd4-8280-26645e84c0d9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:30 compute-0 ovn_controller[19768]: 2025-10-08T16:26:30Z|00120|binding|INFO|Releasing lport c11878dc-b81c-4cd4-8280-26645e84c0d9 from this chassis (sb_readonly=0)
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 nova_compute[117413]: 2025-10-08 16:26:30.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.694 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c0faface-88d9-40d9-9afc-c26f7f380d68]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.695 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.695 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.695 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 56ad396c-4245-4eb9-9237-69e9ea6a760a disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.695 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.696 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3ab22a-f7fb-41d4-aa52-3218971eee67]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.697 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.697 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b80487b0-2856-4308-9e29-1bfc0b687713]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.698 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:26:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:30.699 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'env', 'PROCESS_TAG=haproxy-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/56ad396c-4245-4eb9-9237-69e9ea6a760a.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:31 compute-0 podman[146383]: 2025-10-08 16:26:31.117559695 +0000 UTC m=+0.058793419 container create 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:26:31 compute-0 systemd[1]: Started libpod-conmon-94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a.scope.
Oct 08 16:26:31 compute-0 podman[146383]: 2025-10-08 16:26:31.085228226 +0000 UTC m=+0.026461970 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:26:31 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cab9706b89f5ce662fbc618348b7c2f3ae49855d80e1ccbd628e42e4e89d9af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:26:31 compute-0 podman[146383]: 2025-10-08 16:26:31.204575712 +0000 UTC m=+0.145809456 container init 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:26:31 compute-0 podman[146383]: 2025-10-08 16:26:31.211947254 +0000 UTC m=+0.153180978 container start 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:26:31 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [NOTICE]   (146436) : New worker (146447) forked
Oct 08 16:26:31 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [NOTICE]   (146436) : Loading success.
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.245 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:26:31 compute-0 podman[146395]: 2025-10-08 16:26:31.247654499 +0000 UTC m=+0.090126628 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.255 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.258 2 INFO nova.virt.libvirt.driver [-] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Instance spawned successfully.
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.259 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:26:31 compute-0 podman[146399]: 2025-10-08 16:26:31.279501053 +0000 UTC m=+0.118094371 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: ERROR   16:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: ERROR   16:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: ERROR   16:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: ERROR   16:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: ERROR   16:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:26:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.771 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.772 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.772 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.772 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.773 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:26:31 compute-0 nova_compute[117413]: 2025-10-08 16:26:31.773 2 DEBUG nova.virt.libvirt.driver [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.282 2 INFO nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Took 8.60 seconds to spawn the instance on the hypervisor.
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.282 2 DEBUG nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.440 2 DEBUG nova.compute.manager [req-a2a74de8-9be8-4b93-a342-24b752731360 req-84672942-91ac-4eaa-be16-48b4e906024f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-plugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.440 2 DEBUG oslo_concurrency.lockutils [req-a2a74de8-9be8-4b93-a342-24b752731360 req-84672942-91ac-4eaa-be16-48b4e906024f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.441 2 DEBUG oslo_concurrency.lockutils [req-a2a74de8-9be8-4b93-a342-24b752731360 req-84672942-91ac-4eaa-be16-48b4e906024f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.441 2 DEBUG oslo_concurrency.lockutils [req-a2a74de8-9be8-4b93-a342-24b752731360 req-84672942-91ac-4eaa-be16-48b4e906024f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.441 2 DEBUG nova.compute.manager [req-a2a74de8-9be8-4b93-a342-24b752731360 req-84672942-91ac-4eaa-be16-48b4e906024f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] No waiting events found dispatching network-vif-plugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.441 2 WARNING nova.compute.manager [req-a2a74de8-9be8-4b93-a342-24b752731360 req-84672942-91ac-4eaa-be16-48b4e906024f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received unexpected event network-vif-plugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e for instance with vm_state active and task_state None.
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:32 compute-0 nova_compute[117413]: 2025-10-08 16:26:32.827 2 INFO nova.compute.manager [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Took 13.85 seconds to build instance.
Oct 08 16:26:33 compute-0 nova_compute[117413]: 2025-10-08 16:26:33.333 2 DEBUG oslo_concurrency.lockutils [None req-68d542ef-7707-493a-999d-78c05bfe461a 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.374s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:33 compute-0 sshd-session[146289]: Connection reset by 205.210.31.89 port 60304 [preauth]
Oct 08 16:26:36 compute-0 nova_compute[117413]: 2025-10-08 16:26:36.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:37 compute-0 nova_compute[117413]: 2025-10-08 16:26:37.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:40 compute-0 podman[146462]: 2025-10-08 16:26:40.450655476 +0000 UTC m=+0.054223082 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:26:41 compute-0 nova_compute[117413]: 2025-10-08 16:26:41.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:41.903 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:41.903 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:41.904 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:42 compute-0 nova_compute[117413]: 2025-10-08 16:26:42.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:42 compute-0 ovn_controller[19768]: 2025-10-08T16:26:42Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:a3:d8 10.100.0.4
Oct 08 16:26:42 compute-0 ovn_controller[19768]: 2025-10-08T16:26:42Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:a3:d8 10.100.0.4
Oct 08 16:26:45 compute-0 podman[146498]: 2025-10-08 16:26:45.446214147 +0000 UTC m=+0.055844658 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, release=1755695350)
Oct 08 16:26:45 compute-0 nova_compute[117413]: 2025-10-08 16:26:45.796 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Creating tmpfile /var/lib/nova/instances/tmpufg6_no2 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:26:45 compute-0 nova_compute[117413]: 2025-10-08 16:26:45.797 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:45 compute-0 nova_compute[117413]: 2025-10-08 16:26:45.808 2 DEBUG nova.compute.manager [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufg6_no2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:26:46 compute-0 nova_compute[117413]: 2025-10-08 16:26:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:47 compute-0 sshd-session[146251]: error: kex_exchange_identification: read: Connection reset by peer
Oct 08 16:26:47 compute-0 sshd-session[146251]: Connection reset by 45.140.17.97 port 13998
Oct 08 16:26:47 compute-0 nova_compute[117413]: 2025-10-08 16:26:47.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:47 compute-0 nova_compute[117413]: 2025-10-08 16:26:47.849 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:51 compute-0 nova_compute[117413]: 2025-10-08 16:26:51.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:51 compute-0 podman[146520]: 2025-10-08 16:26:51.487248447 +0000 UTC m=+0.085361227 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:26:51 compute-0 nova_compute[117413]: 2025-10-08 16:26:51.830 2 DEBUG nova.compute.manager [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufg6_no2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='57120e80-d456-4229-84bb-f8ddc2cdbe4c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:26:52 compute-0 nova_compute[117413]: 2025-10-08 16:26:52.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:52 compute-0 nova_compute[117413]: 2025-10-08 16:26:52.846 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-57120e80-d456-4229-84bb-f8ddc2cdbe4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:26:52 compute-0 nova_compute[117413]: 2025-10-08 16:26:52.846 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-57120e80-d456-4229-84bb-f8ddc2cdbe4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:26:52 compute-0 nova_compute[117413]: 2025-10-08 16:26:52.846 2 DEBUG nova.network.neutron [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:26:53 compute-0 nova_compute[117413]: 2025-10-08 16:26:53.353 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:54 compute-0 nova_compute[117413]: 2025-10-08 16:26:54.925 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:55 compute-0 podman[146541]: 2025-10-08 16:26:55.444059671 +0000 UTC m=+0.052052899 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:26:56 compute-0 nova_compute[117413]: 2025-10-08 16:26:56.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:56 compute-0 nova_compute[117413]: 2025-10-08 16:26:56.507 2 DEBUG nova.network.neutron [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Updating instance_info_cache with network_info: [{"id": "544d9024-750c-48e8-83f0-2ce17e7a3048", "address": "fa:16:3e:2e:44:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap544d9024-75", "ovs_interfaceid": "544d9024-750c-48e8-83f0-2ce17e7a3048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.015 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-57120e80-d456-4229-84bb-f8ddc2cdbe4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.030 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufg6_no2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='57120e80-d456-4229-84bb-f8ddc2cdbe4c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.031 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Creating instance directory: /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.031 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Creating disk.info with the contents: {'/var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk': 'qcow2', '/var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.032 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.032 2 DEBUG nova.objects.instance [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 57120e80-d456-4229-84bb-f8ddc2cdbe4c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.539 2 DEBUG oslo_utils.imageutils.format_inspector [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.542 2 DEBUG oslo_utils.imageutils.format_inspector [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.543 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.593 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.594 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.594 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.595 2 DEBUG oslo_utils.imageutils.format_inspector [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.598 2 DEBUG oslo_utils.imageutils.format_inspector [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.598 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.651 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.651 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.689 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.690 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.096s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.691 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.745 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.746 2 DEBUG nova.virt.disk.api [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.747 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.800 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.802 2 DEBUG nova.virt.disk.api [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:26:57 compute-0 nova_compute[117413]: 2025-10-08 16:26:57.803 2 DEBUG nova.objects.instance [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 57120e80-d456-4229-84bb-f8ddc2cdbe4c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.311 2 DEBUG nova.objects.base [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<57120e80-d456-4229-84bb-f8ddc2cdbe4c> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.312 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.344 2 DEBUG oslo_concurrency.processutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk.config 497664" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.345 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.347 2 DEBUG nova.virt.libvirt.vif [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1981828110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1981828110',id=14,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:26:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-yp0549gf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:26:12Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=57120e80-d456-4229-84bb-f8ddc2cdbe4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "544d9024-750c-48e8-83f0-2ce17e7a3048", "address": "fa:16:3e:2e:44:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap544d9024-75", "ovs_interfaceid": "544d9024-750c-48e8-83f0-2ce17e7a3048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.347 2 DEBUG nova.network.os_vif_util [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "544d9024-750c-48e8-83f0-2ce17e7a3048", "address": "fa:16:3e:2e:44:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap544d9024-75", "ovs_interfaceid": "544d9024-750c-48e8-83f0-2ce17e7a3048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.348 2 DEBUG nova.network.os_vif_util [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:44:32,bridge_name='br-int',has_traffic_filtering=True,id=544d9024-750c-48e8-83f0-2ce17e7a3048,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap544d9024-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.349 2 DEBUG os_vif [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:44:32,bridge_name='br-int',has_traffic_filtering=True,id=544d9024-750c-48e8-83f0-2ce17e7a3048,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap544d9024-75') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.351 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'ad2fd4d1-cf9f-59cc-a042-d46e6bb09969', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap544d9024-75, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap544d9024-75, col_values=(('qos', UUID('3b86baaa-9cd7-4100-aa9f-a61967e1fe97')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap544d9024-75, col_values=(('external_ids', {'iface-id': '544d9024-750c-48e8-83f0-2ce17e7a3048', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:44:32', 'vm-uuid': '57120e80-d456-4229-84bb-f8ddc2cdbe4c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 NetworkManager[1034]: <info>  [1759940818.3943] manager: (tap544d9024-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.401 2 INFO os_vif [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:44:32,bridge_name='br-int',has_traffic_filtering=True,id=544d9024-750c-48e8-83f0-2ce17e7a3048,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap544d9024-75')
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.402 2 DEBUG nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.402 2 DEBUG nova.compute.manager [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufg6_no2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='57120e80-d456-4229-84bb-f8ddc2cdbe4c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.403 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:58 compute-0 nova_compute[117413]: 2025-10-08 16:26:58.941 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:26:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:59.198 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:26:59 compute-0 nova_compute[117413]: 2025-10-08 16:26:59.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:26:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:26:59.199 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:26:59 compute-0 podman[127881]: time="2025-10-08T16:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:26:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:26:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3494 "" "Go-http-client/1.1"
Oct 08 16:27:00 compute-0 nova_compute[117413]: 2025-10-08 16:27:00.087 2 DEBUG nova.network.neutron [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Port 544d9024-750c-48e8-83f0-2ce17e7a3048 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:27:00 compute-0 nova_compute[117413]: 2025-10-08 16:27:00.098 2 DEBUG nova.compute.manager [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpufg6_no2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='57120e80-d456-4229-84bb-f8ddc2cdbe4c',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:27:00 compute-0 ovn_controller[19768]: 2025-10-08T16:27:00Z|00121|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 08 16:27:01 compute-0 nova_compute[117413]: 2025-10-08 16:27:01.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:01 compute-0 openstack_network_exporter[130039]: ERROR   16:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:27:01 compute-0 openstack_network_exporter[130039]: ERROR   16:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:27:01 compute-0 openstack_network_exporter[130039]: ERROR   16:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:27:01 compute-0 openstack_network_exporter[130039]: ERROR   16:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:27:01 compute-0 openstack_network_exporter[130039]: ERROR   16:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:27:01 compute-0 podman[146581]: 2025-10-08 16:27:01.462648484 +0000 UTC m=+0.062669115 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:27:01 compute-0 podman[146582]: 2025-10-08 16:27:01.494504941 +0000 UTC m=+0.090654961 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 16:27:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:02.201 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:02 compute-0 nova_compute[117413]: 2025-10-08 16:27:02.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:02 compute-0 nova_compute[117413]: 2025-10-08 16:27:02.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:02 compute-0 kernel: tap544d9024-75: entered promiscuous mode
Oct 08 16:27:02 compute-0 NetworkManager[1034]: <info>  [1759940822.9522] manager: (tap544d9024-75): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 08 16:27:02 compute-0 ovn_controller[19768]: 2025-10-08T16:27:02Z|00122|binding|INFO|Claiming lport 544d9024-750c-48e8-83f0-2ce17e7a3048 for this additional chassis.
Oct 08 16:27:02 compute-0 nova_compute[117413]: 2025-10-08 16:27:02.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:02 compute-0 ovn_controller[19768]: 2025-10-08T16:27:02Z|00123|binding|INFO|544d9024-750c-48e8-83f0-2ce17e7a3048: Claiming fa:16:3e:2e:44:32 10.100.0.10
Oct 08 16:27:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:02.967 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:44:32 10.100.0.10'], port_security=['fa:16:3e:2e:44:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '57120e80-d456-4229-84bb-f8ddc2cdbe4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '10', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=544d9024-750c-48e8-83f0-2ce17e7a3048) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:27:02 compute-0 ovn_controller[19768]: 2025-10-08T16:27:02Z|00124|binding|INFO|Setting lport 544d9024-750c-48e8-83f0-2ce17e7a3048 ovn-installed in OVS
Oct 08 16:27:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:02.968 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 544d9024-750c-48e8-83f0-2ce17e7a3048 in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a unbound from our chassis
Oct 08 16:27:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:02.969 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:27:02 compute-0 nova_compute[117413]: 2025-10-08 16:27:02.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:02 compute-0 nova_compute[117413]: 2025-10-08 16:27:02.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:02 compute-0 systemd-udevd[146645]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:27:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:02.985 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3f8a6586-7df9-4008-aa13-1c139a20a4d2]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:02 compute-0 NetworkManager[1034]: <info>  [1759940822.9987] device (tap544d9024-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:27:03 compute-0 NetworkManager[1034]: <info>  [1759940823.0000] device (tap544d9024-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:27:03 compute-0 systemd-machined[77548]: New machine qemu-11-instance-0000000e.
Oct 08 16:27:03 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000e.
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.025 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[951f5d5d-460a-4abb-a3c4-5199ce1d3c1e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.030 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbdcace-4550-4552-bff9-fb7635347c3f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.072 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[28cdb9b5-2a8f-437c-9bec-54493173789b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.092 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1c29c0ea-07ea-43ab-b4f8-28cf92eabc45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 207911, 'reachable_time': 33605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146658, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.109 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4892c3a9-9143-4e82-9667-93f34bbf9d5d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 207929, 'tstamp': 207929}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 146660, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 207933, 'tstamp': 207933}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 146660, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.110 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.113 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ad396c-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.114 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.114 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap56ad396c-40, col_values=(('external_ids', {'iface-id': 'c11878dc-b81c-4cd4-8280-26645e84c0d9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.114 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:27:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:03.115 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3cdf8153-cc5a-4879-a868-e4350dc5aba4]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-56ad396c-4245-4eb9-9237-69e9ea6a760a\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 56ad396c-4245-4eb9-9237-69e9ea6a760a\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.874 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.874 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.875 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:03 compute-0 nova_compute[117413]: 2025-10-08 16:27:03.875 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:27:04 compute-0 nova_compute[117413]: 2025-10-08 16:27:04.922 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:27:04 compute-0 nova_compute[117413]: 2025-10-08 16:27:04.977 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:27:04 compute-0 nova_compute[117413]: 2025-10-08 16:27:04.978 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.032 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.037 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.091 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.092 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.143 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727/disk --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.272 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.273 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.289 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.289 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5947MB free_disk=73.22536087036133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.290 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:05 compute-0 nova_compute[117413]: 2025-10-08 16:27:05.290 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:05 compute-0 ovn_controller[19768]: 2025-10-08T16:27:05Z|00125|binding|INFO|Claiming lport 544d9024-750c-48e8-83f0-2ce17e7a3048 for this chassis.
Oct 08 16:27:05 compute-0 ovn_controller[19768]: 2025-10-08T16:27:05Z|00126|binding|INFO|544d9024-750c-48e8-83f0-2ce17e7a3048: Claiming fa:16:3e:2e:44:32 10.100.0.10
Oct 08 16:27:05 compute-0 ovn_controller[19768]: 2025-10-08T16:27:05Z|00127|binding|INFO|Setting lport 544d9024-750c-48e8-83f0-2ce17e7a3048 up in Southbound
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.306 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Migration for instance 57120e80-d456-4229-84bb-f8ddc2cdbe4c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.551 2 INFO nova.compute.manager [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Post operation of migration started
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.552 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.635 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.636 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.721 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-57120e80-d456-4229-84bb-f8ddc2cdbe4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.722 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-57120e80-d456-4229-84bb-f8ddc2cdbe4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.722 2 DEBUG nova.network.neutron [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.814 2 INFO nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Updating resource usage from migration 43e55f07-6543-487a-aa05-dd139bea3202
Oct 08 16:27:06 compute-0 nova_compute[117413]: 2025-10-08 16:27:06.814 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Starting to track incoming migration 43e55f07-6543-487a-aa05-dd139bea3202 with flavor 43cd5d45-bd07-4889-a671-dd23291090c1 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 08 16:27:07 compute-0 nova_compute[117413]: 2025-10-08 16:27:07.227 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:07 compute-0 nova_compute[117413]: 2025-10-08 16:27:07.342 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 7bf9717b-884a-4c47-a0d2-3d00ce297727 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:27:07 compute-0 nova_compute[117413]: 2025-10-08 16:27:07.848 2 WARNING nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 57120e80-d456-4229-84bb-f8ddc2cdbe4c has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 08 16:27:07 compute-0 nova_compute[117413]: 2025-10-08 16:27:07.849 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:27:07 compute-0 nova_compute[117413]: 2025-10-08 16:27:07.849 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:27:05 up 35 min,  0 user,  load average: 0.33, 0.22, 0.25\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_1820638f7dc1498db1dd11607c4370f2': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:27:07 compute-0 nova_compute[117413]: 2025-10-08 16:27:07.890 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.152 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.280 2 DEBUG nova.network.neutron [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Updating instance_info_cache with network_info: [{"id": "544d9024-750c-48e8-83f0-2ce17e7a3048", "address": "fa:16:3e:2e:44:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap544d9024-75", "ovs_interfaceid": "544d9024-750c-48e8-83f0-2ce17e7a3048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.396 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.785 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-57120e80-d456-4229-84bb-f8ddc2cdbe4c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.905 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:27:08 compute-0 nova_compute[117413]: 2025-10-08 16:27:08.906 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.616s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:09 compute-0 nova_compute[117413]: 2025-10-08 16:27:09.309 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:09 compute-0 nova_compute[117413]: 2025-10-08 16:27:09.309 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:09 compute-0 nova_compute[117413]: 2025-10-08 16:27:09.310 2 DEBUG oslo_concurrency.lockutils [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:09 compute-0 nova_compute[117413]: 2025-10-08 16:27:09.314 2 INFO nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:27:09 compute-0 virtqemud[117740]: Domain id=11 name='instance-0000000e' uuid=57120e80-d456-4229-84bb-f8ddc2cdbe4c is tainted: custom-monitor
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.322 2 INFO nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.906 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.907 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.907 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.908 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.908 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:27:10 compute-0 nova_compute[117413]: 2025-10-08 16:27:10.909 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:27:11 compute-0 nova_compute[117413]: 2025-10-08 16:27:11.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:11 compute-0 nova_compute[117413]: 2025-10-08 16:27:11.330 2 INFO nova.virt.libvirt.driver [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:27:11 compute-0 nova_compute[117413]: 2025-10-08 16:27:11.335 2 DEBUG nova.compute.manager [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:27:11 compute-0 podman[146697]: 2025-10-08 16:27:11.46234791 +0000 UTC m=+0.064772025 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:27:11 compute-0 nova_compute[117413]: 2025-10-08 16:27:11.844 2 DEBUG nova.objects.instance [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:27:12 compute-0 nova_compute[117413]: 2025-10-08 16:27:12.859 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:12 compute-0 nova_compute[117413]: 2025-10-08 16:27:12.939 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:12 compute-0 nova_compute[117413]: 2025-10-08 16:27:12.939 2 WARNING neutronclient.v2_0.client [None req-aceeca12-4514-4c10-bda1-1d72a2cad66c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:13 compute-0 nova_compute[117413]: 2025-10-08 16:27:13.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:16 compute-0 nova_compute[117413]: 2025-10-08 16:27:16.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:16 compute-0 podman[146717]: 2025-10-08 16:27:16.491293061 +0000 UTC m=+0.085726888 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Oct 08 16:27:18 compute-0 nova_compute[117413]: 2025-10-08 16:27:18.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.021 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.022 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.022 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.022 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.022 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.034 2 INFO nova.compute.manager [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Terminating instance
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.550 2 DEBUG nova.compute.manager [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:27:19 compute-0 kernel: tap5e9886a9-99 (unregistering): left promiscuous mode
Oct 08 16:27:19 compute-0 NetworkManager[1034]: <info>  [1759940839.5746] device (tap5e9886a9-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 ovn_controller[19768]: 2025-10-08T16:27:19Z|00128|binding|INFO|Releasing lport 5e9886a9-99b8-42cb-9de2-4102298b8e9e from this chassis (sb_readonly=0)
Oct 08 16:27:19 compute-0 ovn_controller[19768]: 2025-10-08T16:27:19Z|00129|binding|INFO|Setting lport 5e9886a9-99b8-42cb-9de2-4102298b8e9e down in Southbound
Oct 08 16:27:19 compute-0 ovn_controller[19768]: 2025-10-08T16:27:19Z|00130|binding|INFO|Removing iface tap5e9886a9-99 ovn-installed in OVS
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.591 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:a3:d8 10.100.0.4'], port_security=['fa:16:3e:27:a3:d8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7bf9717b-884a-4c47-a0d2-3d00ce297727', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=5e9886a9-99b8-42cb-9de2-4102298b8e9e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.593 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 5e9886a9-99b8-42cb-9de2-4102298b8e9e in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a unbound from our chassis
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.594 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 56ad396c-4245-4eb9-9237-69e9ea6a760a
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.613 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c68071-f56d-44b7-9a72-2f2ca800ef87]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.647 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[50fee557-9d86-4b60-8dc4-db5bf8ade46f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.649 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[5d482075-cb3a-435b-bda9-5646e6e21ce5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 08 16:27:19 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000f.scope: Consumed 14.077s CPU time.
Oct 08 16:27:19 compute-0 systemd-machined[77548]: Machine qemu-10-instance-0000000f terminated.
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.690 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[d4752f3f-d298-4080-b0e7-a30778f4a98d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.707 2 DEBUG nova.compute.manager [req-03009a1c-36bc-407d-93c9-fd123ab11bd8 req-3e82c71e-accb-4b54-be52-835b9c4d5c19 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-unplugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.708 2 DEBUG oslo_concurrency.lockutils [req-03009a1c-36bc-407d-93c9-fd123ab11bd8 req-3e82c71e-accb-4b54-be52-835b9c4d5c19 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.708 2 DEBUG oslo_concurrency.lockutils [req-03009a1c-36bc-407d-93c9-fd123ab11bd8 req-3e82c71e-accb-4b54-be52-835b9c4d5c19 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.708 2 DEBUG oslo_concurrency.lockutils [req-03009a1c-36bc-407d-93c9-fd123ab11bd8 req-3e82c71e-accb-4b54-be52-835b9c4d5c19 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.708 2 DEBUG nova.compute.manager [req-03009a1c-36bc-407d-93c9-fd123ab11bd8 req-3e82c71e-accb-4b54-be52-835b9c4d5c19 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] No waiting events found dispatching network-vif-unplugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.709 2 DEBUG nova.compute.manager [req-03009a1c-36bc-407d-93c9-fd123ab11bd8 req-3e82c71e-accb-4b54-be52-835b9c4d5c19 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-unplugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.713 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[839716a0-23b6-4b43-8cc4-10b86f0d1b60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap56ad396c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:36:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 207911, 'reachable_time': 33605, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146749, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.733 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e0544e2f-1fd3-45f4-b4ef-7bab1c82f1c8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 207929, 'tstamp': 207929}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 146750, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap56ad396c-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 207933, 'tstamp': 207933}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 146750, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.734 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.739 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56ad396c-40, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.739 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.739 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap56ad396c-40, col_values=(('external_ids', {'iface-id': 'c11878dc-b81c-4cd4-8280-26645e84c0d9'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.740 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:27:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:19.741 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a66163df-7406-4974-bb39-736259cbd4b6]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-56ad396c-4245-4eb9-9237-69e9ea6a760a\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 56ad396c-4245-4eb9-9237-69e9ea6a760a\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.823 2 INFO nova.virt.libvirt.driver [-] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Instance destroyed successfully.
Oct 08 16:27:19 compute-0 nova_compute[117413]: 2025-10-08 16:27:19.823 2 DEBUG nova.objects.instance [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lazy-loading 'resources' on Instance uuid 7bf9717b-884a-4c47-a0d2-3d00ce297727 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.328 2 DEBUG nova.virt.libvirt.vif [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:26:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1858845142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1858845142',id=15,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:26:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-lp60osjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:26:32Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=7bf9717b-884a-4c47-a0d2-3d00ce297727,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.329 2 DEBUG nova.network.os_vif_util [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "address": "fa:16:3e:27:a3:d8", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e9886a9-99", "ovs_interfaceid": "5e9886a9-99b8-42cb-9de2-4102298b8e9e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.330 2 DEBUG nova.network.os_vif_util [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.330 2 DEBUG os_vif [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e9886a9-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=cefc902c-0464-4c64-b59a-e82ec17d9907) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.339 2 INFO os_vif [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:a3:d8,bridge_name='br-int',has_traffic_filtering=True,id=5e9886a9-99b8-42cb-9de2-4102298b8e9e,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e9886a9-99')
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.339 2 INFO nova.virt.libvirt.driver [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Deleting instance files /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727_del
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.340 2 INFO nova.virt.libvirt.driver [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Deletion of /var/lib/nova/instances/7bf9717b-884a-4c47-a0d2-3d00ce297727_del complete
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.854 2 INFO nova.compute.manager [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.854 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.854 2 DEBUG nova.compute.manager [-] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.855 2 DEBUG nova.network.neutron [-] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.855 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:20 compute-0 nova_compute[117413]: 2025-10-08 16:27:20.939 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.245 2 DEBUG nova.compute.manager [req-e01aecb2-47f9-4cf5-bc9a-c001122dc835 req-d734dea4-fb15-47e7-aa3b-57473e83e79a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-deleted-5e9886a9-99b8-42cb-9de2-4102298b8e9e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.246 2 INFO nova.compute.manager [req-e01aecb2-47f9-4cf5-bc9a-c001122dc835 req-d734dea4-fb15-47e7-aa3b-57473e83e79a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Neutron deleted interface 5e9886a9-99b8-42cb-9de2-4102298b8e9e; detaching it from the instance and deleting it from the info cache
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.246 2 DEBUG nova.network.neutron [req-e01aecb2-47f9-4cf5-bc9a-c001122dc835 req-d734dea4-fb15-47e7-aa3b-57473e83e79a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.696 2 DEBUG nova.network.neutron [-] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.750 2 DEBUG nova.compute.manager [req-88009bda-15f2-41a9-bc24-3ce3e7b57944 req-a3c3936e-407c-48ec-8b90-e03d65d98f83 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-unplugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.750 2 DEBUG oslo_concurrency.lockutils [req-88009bda-15f2-41a9-bc24-3ce3e7b57944 req-a3c3936e-407c-48ec-8b90-e03d65d98f83 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.751 2 DEBUG oslo_concurrency.lockutils [req-88009bda-15f2-41a9-bc24-3ce3e7b57944 req-a3c3936e-407c-48ec-8b90-e03d65d98f83 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.751 2 DEBUG oslo_concurrency.lockutils [req-88009bda-15f2-41a9-bc24-3ce3e7b57944 req-a3c3936e-407c-48ec-8b90-e03d65d98f83 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.751 2 DEBUG nova.compute.manager [req-88009bda-15f2-41a9-bc24-3ce3e7b57944 req-a3c3936e-407c-48ec-8b90-e03d65d98f83 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] No waiting events found dispatching network-vif-unplugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.751 2 DEBUG nova.compute.manager [req-88009bda-15f2-41a9-bc24-3ce3e7b57944 req-a3c3936e-407c-48ec-8b90-e03d65d98f83 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Received event network-vif-unplugged-5e9886a9-99b8-42cb-9de2-4102298b8e9e for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:27:21 compute-0 nova_compute[117413]: 2025-10-08 16:27:21.755 2 DEBUG nova.compute.manager [req-e01aecb2-47f9-4cf5-bc9a-c001122dc835 req-d734dea4-fb15-47e7-aa3b-57473e83e79a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Detach interface failed, port_id=5e9886a9-99b8-42cb-9de2-4102298b8e9e, reason: Instance 7bf9717b-884a-4c47-a0d2-3d00ce297727 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:27:22 compute-0 nova_compute[117413]: 2025-10-08 16:27:22.204 2 INFO nova.compute.manager [-] [instance: 7bf9717b-884a-4c47-a0d2-3d00ce297727] Took 1.35 seconds to deallocate network for instance.
Oct 08 16:27:22 compute-0 podman[146768]: 2025-10-08 16:27:22.466326074 +0000 UTC m=+0.074861835 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 08 16:27:22 compute-0 nova_compute[117413]: 2025-10-08 16:27:22.723 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:22 compute-0 nova_compute[117413]: 2025-10-08 16:27:22.724 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:22 compute-0 nova_compute[117413]: 2025-10-08 16:27:22.778 2 DEBUG nova.compute.provider_tree [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:27:23 compute-0 nova_compute[117413]: 2025-10-08 16:27:23.287 2 DEBUG nova.scheduler.client.report [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:27:23 compute-0 nova_compute[117413]: 2025-10-08 16:27:23.806 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.082s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:23 compute-0 nova_compute[117413]: 2025-10-08 16:27:23.829 2 INFO nova.scheduler.client.report [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Deleted allocations for instance 7bf9717b-884a-4c47-a0d2-3d00ce297727
Oct 08 16:27:24 compute-0 nova_compute[117413]: 2025-10-08 16:27:24.857 2 DEBUG oslo_concurrency.lockutils [None req-0bfc24bd-af16-4b29-86cc-9579d408784b 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "7bf9717b-884a-4c47-a0d2-3d00ce297727" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.835s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.801 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.801 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.801 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.802 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.802 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:25 compute-0 nova_compute[117413]: 2025-10-08 16:27:25.812 2 INFO nova.compute.manager [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Terminating instance
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.325 2 DEBUG nova.compute.manager [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:27:26 compute-0 kernel: tap544d9024-75 (unregistering): left promiscuous mode
Oct 08 16:27:26 compute-0 NetworkManager[1034]: <info>  [1759940846.3592] device (tap544d9024-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:27:26 compute-0 ovn_controller[19768]: 2025-10-08T16:27:26Z|00131|binding|INFO|Releasing lport 544d9024-750c-48e8-83f0-2ce17e7a3048 from this chassis (sb_readonly=0)
Oct 08 16:27:26 compute-0 ovn_controller[19768]: 2025-10-08T16:27:26Z|00132|binding|INFO|Setting lport 544d9024-750c-48e8-83f0-2ce17e7a3048 down in Southbound
Oct 08 16:27:26 compute-0 ovn_controller[19768]: 2025-10-08T16:27:26Z|00133|binding|INFO|Removing iface tap544d9024-75 ovn-installed in OVS
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.372 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:44:32 10.100.0.10'], port_security=['fa:16:3e:2e:44:32 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '57120e80-d456-4229-84bb-f8ddc2cdbe4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1820638f7dc1498db1dd11607c4370f2', 'neutron:revision_number': '15', 'neutron:security_group_ids': '9aaea0fc-afb8-4aa4-827a-c3a5e7706faf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6ac4acf5-70fa-4592-a711-3c63ae37ec88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=544d9024-750c-48e8-83f0-2ce17e7a3048) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.373 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 544d9024-750c-48e8-83f0-2ce17e7a3048 in datapath 56ad396c-4245-4eb9-9237-69e9ea6a760a unbound from our chassis
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.374 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 56ad396c-4245-4eb9-9237-69e9ea6a760a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.375 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1770dfa8-b1b8-4946-97df-33c2c3e28293]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.375 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a namespace which is not needed anymore
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 08 16:27:26 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000e.scope: Consumed 2.426s CPU time.
Oct 08 16:27:26 compute-0 systemd-machined[77548]: Machine qemu-11-instance-0000000e terminated.
Oct 08 16:27:26 compute-0 podman[146791]: 2025-10-08 16:27:26.490944248 +0000 UTC m=+0.082986828 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.502 2 DEBUG nova.compute.manager [req-a70ca75b-2789-4474-8b11-58ed021d8261 req-a90be82a-7a60-4fbd-b61a-89a28d5f3167 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Received event network-vif-unplugged-544d9024-750c-48e8-83f0-2ce17e7a3048 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.503 2 DEBUG oslo_concurrency.lockutils [req-a70ca75b-2789-4474-8b11-58ed021d8261 req-a90be82a-7a60-4fbd-b61a-89a28d5f3167 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.503 2 DEBUG oslo_concurrency.lockutils [req-a70ca75b-2789-4474-8b11-58ed021d8261 req-a90be82a-7a60-4fbd-b61a-89a28d5f3167 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.503 2 DEBUG oslo_concurrency.lockutils [req-a70ca75b-2789-4474-8b11-58ed021d8261 req-a90be82a-7a60-4fbd-b61a-89a28d5f3167 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.503 2 DEBUG nova.compute.manager [req-a70ca75b-2789-4474-8b11-58ed021d8261 req-a90be82a-7a60-4fbd-b61a-89a28d5f3167 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] No waiting events found dispatching network-vif-unplugged-544d9024-750c-48e8-83f0-2ce17e7a3048 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.504 2 DEBUG nova.compute.manager [req-a70ca75b-2789-4474-8b11-58ed021d8261 req-a90be82a-7a60-4fbd-b61a-89a28d5f3167 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Received event network-vif-unplugged-544d9024-750c-48e8-83f0-2ce17e7a3048 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:27:26 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [NOTICE]   (146436) : haproxy version is 3.0.5-8e879a5
Oct 08 16:27:26 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [NOTICE]   (146436) : path to executable is /usr/sbin/haproxy
Oct 08 16:27:26 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [WARNING]  (146436) : Exiting Master process...
Oct 08 16:27:26 compute-0 podman[146831]: 2025-10-08 16:27:26.520308813 +0000 UTC m=+0.030265652 container kill 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007)
Oct 08 16:27:26 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [ALERT]    (146436) : Current worker (146447) exited with code 143 (Terminated)
Oct 08 16:27:26 compute-0 neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a[146400]: [WARNING]  (146436) : All workers exited. Exiting... (0)
Oct 08 16:27:26 compute-0 systemd[1]: libpod-94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a.scope: Deactivated successfully.
Oct 08 16:27:26 compute-0 podman[146846]: 2025-10-08 16:27:26.573769122 +0000 UTC m=+0.028990935 container died 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a-userdata-shm.mount: Deactivated successfully.
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.602 2 INFO nova.virt.libvirt.driver [-] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Instance destroyed successfully.
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.603 2 DEBUG nova.objects.instance [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lazy-loading 'resources' on Instance uuid 57120e80-d456-4229-84bb-f8ddc2cdbe4c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cab9706b89f5ce662fbc618348b7c2f3ae49855d80e1ccbd628e42e4e89d9af-merged.mount: Deactivated successfully.
Oct 08 16:27:26 compute-0 podman[146846]: 2025-10-08 16:27:26.619059096 +0000 UTC m=+0.074280889 container cleanup 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 08 16:27:26 compute-0 systemd[1]: libpod-conmon-94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a.scope: Deactivated successfully.
Oct 08 16:27:26 compute-0 podman[146849]: 2025-10-08 16:27:26.634259513 +0000 UTC m=+0.080255791 container remove 94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.648 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[34720b94-c0eb-4b2c-909e-44d8302f3bd8]: (4, ("Wed Oct  8 04:27:26 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a (94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a)\n94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a\nWed Oct  8 04:27:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a (94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a)\n94b9910ccdad87c4bc292b9d57c4377431df1fbcdf361f287bbfe035c55b4b4a\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.650 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8d453da9-b23a-424f-9bb0-8c59d2fc4312]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.651 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/56ad396c-4245-4eb9-9237-69e9ea6a760a.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.652 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[04392584-bda1-41a3-8037-3962e489cc90]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.652 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56ad396c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:26 compute-0 kernel: tap56ad396c-40: left promiscuous mode
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:26 compute-0 nova_compute[117413]: 2025-10-08 16:27:26.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.672 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e0542316-4289-4f40-a175-0024e9e86dac]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.708 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9de12f9c-b0f8-4d51-8c63-ba3ae0f626b7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.709 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2465da-992a-47f0-be6f-7c57c08ca8a3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.726 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ae777789-6ff1-4bc6-b609-5c93a1578f4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 207903, 'reachable_time': 18847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 146899, 'error': None, 'target': 'ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.728 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-56ad396c-4245-4eb9-9237-69e9ea6a760a deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:27:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:26.728 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[91457551-8632-433a-8988-1236fc4af94f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d56ad396c\x2d4245\x2d4eb9\x2d9237\x2d69e9ea6a760a.mount: Deactivated successfully.
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.110 2 DEBUG nova.virt.libvirt.vif [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1981828110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1981828110',id=14,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:26:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1820638f7dc1498db1dd11607c4370f2',ramdisk_id='',reservation_id='r-yp0549gf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-1649105137-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:27:12Z,user_data=None,user_id='93b0b144b7494967bce532f29a6a5c53',uuid=57120e80-d456-4229-84bb-f8ddc2cdbe4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "544d9024-750c-48e8-83f0-2ce17e7a3048", "address": "fa:16:3e:2e:44:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap544d9024-75", "ovs_interfaceid": "544d9024-750c-48e8-83f0-2ce17e7a3048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.110 2 DEBUG nova.network.os_vif_util [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converting VIF {"id": "544d9024-750c-48e8-83f0-2ce17e7a3048", "address": "fa:16:3e:2e:44:32", "network": {"id": "56ad396c-4245-4eb9-9237-69e9ea6a760a", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-389167926-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7ecc0bf1374713b56acbc9eabf6d9e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap544d9024-75", "ovs_interfaceid": "544d9024-750c-48e8-83f0-2ce17e7a3048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.111 2 DEBUG nova.network.os_vif_util [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:44:32,bridge_name='br-int',has_traffic_filtering=True,id=544d9024-750c-48e8-83f0-2ce17e7a3048,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap544d9024-75') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.112 2 DEBUG os_vif [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:44:32,bridge_name='br-int',has_traffic_filtering=True,id=544d9024-750c-48e8-83f0-2ce17e7a3048,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap544d9024-75') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap544d9024-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=3b86baaa-9cd7-4100-aa9f-a61967e1fe97) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.121 2 INFO os_vif [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:44:32,bridge_name='br-int',has_traffic_filtering=True,id=544d9024-750c-48e8-83f0-2ce17e7a3048,network=Network(56ad396c-4245-4eb9-9237-69e9ea6a760a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap544d9024-75')
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.122 2 INFO nova.virt.libvirt.driver [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Deleting instance files /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c_del
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.122 2 INFO nova.virt.libvirt.driver [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Deletion of /var/lib/nova/instances/57120e80-d456-4229-84bb-f8ddc2cdbe4c_del complete
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.633 2 INFO nova.compute.manager [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.633 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.633 2 DEBUG nova.compute.manager [-] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.634 2 DEBUG nova.network.neutron [-] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.634 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:27 compute-0 nova_compute[117413]: 2025-10-08 16:27:27.938 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.570 2 DEBUG nova.compute.manager [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Received event network-vif-unplugged-544d9024-750c-48e8-83f0-2ce17e7a3048 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.570 2 DEBUG oslo_concurrency.lockutils [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.571 2 DEBUG oslo_concurrency.lockutils [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.571 2 DEBUG oslo_concurrency.lockutils [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.571 2 DEBUG nova.compute.manager [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] No waiting events found dispatching network-vif-unplugged-544d9024-750c-48e8-83f0-2ce17e7a3048 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.571 2 DEBUG nova.compute.manager [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Received event network-vif-unplugged-544d9024-750c-48e8-83f0-2ce17e7a3048 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.572 2 DEBUG nova.compute.manager [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Received event network-vif-deleted-544d9024-750c-48e8-83f0-2ce17e7a3048 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.572 2 INFO nova.compute.manager [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Neutron deleted interface 544d9024-750c-48e8-83f0-2ce17e7a3048; detaching it from the instance and deleting it from the info cache
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.572 2 DEBUG nova.network.neutron [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:27:28 compute-0 nova_compute[117413]: 2025-10-08 16:27:28.687 2 DEBUG nova.network.neutron [-] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:27:29 compute-0 nova_compute[117413]: 2025-10-08 16:27:29.079 2 DEBUG nova.compute.manager [req-ffff50c9-492b-4d3b-a8ad-686f3f61aeb6 req-eee5454c-fd13-4e85-9c0e-9f02e21978ed c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Detach interface failed, port_id=544d9024-750c-48e8-83f0-2ce17e7a3048, reason: Instance 57120e80-d456-4229-84bb-f8ddc2cdbe4c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:27:29 compute-0 nova_compute[117413]: 2025-10-08 16:27:29.193 2 INFO nova.compute.manager [-] [instance: 57120e80-d456-4229-84bb-f8ddc2cdbe4c] Took 1.56 seconds to deallocate network for instance.
Oct 08 16:27:29 compute-0 nova_compute[117413]: 2025-10-08 16:27:29.714 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:29 compute-0 nova_compute[117413]: 2025-10-08 16:27:29.715 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:29 compute-0 nova_compute[117413]: 2025-10-08 16:27:29.720 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:29 compute-0 nova_compute[117413]: 2025-10-08 16:27:29.745 2 INFO nova.scheduler.client.report [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Deleted allocations for instance 57120e80-d456-4229-84bb-f8ddc2cdbe4c
Oct 08 16:27:29 compute-0 podman[127881]: time="2025-10-08T16:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:27:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:27:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3022 "" "Go-http-client/1.1"
Oct 08 16:27:30 compute-0 nova_compute[117413]: 2025-10-08 16:27:30.772 2 DEBUG oslo_concurrency.lockutils [None req-179736e1-af16-4d3e-8479-3b5964b7cf28 93b0b144b7494967bce532f29a6a5c53 1820638f7dc1498db1dd11607c4370f2 - - default default] Lock "57120e80-d456-4229-84bb-f8ddc2cdbe4c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.971s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:31 compute-0 nova_compute[117413]: 2025-10-08 16:27:31.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: ERROR   16:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: ERROR   16:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: ERROR   16:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: ERROR   16:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: ERROR   16:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:27:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:27:32 compute-0 nova_compute[117413]: 2025-10-08 16:27:32.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:32 compute-0 podman[146900]: 2025-10-08 16:27:32.452961155 +0000 UTC m=+0.057111054 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:27:32 compute-0 podman[146901]: 2025-10-08 16:27:32.494174042 +0000 UTC m=+0.094348617 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:27:36 compute-0 nova_compute[117413]: 2025-10-08 16:27:36.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:36 compute-0 nova_compute[117413]: 2025-10-08 16:27:36.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:37 compute-0 nova_compute[117413]: 2025-10-08 16:27:37.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:41 compute-0 nova_compute[117413]: 2025-10-08 16:27:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:41.905 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:27:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:41.906 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:27:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:41.906 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:27:42 compute-0 nova_compute[117413]: 2025-10-08 16:27:42.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:42 compute-0 podman[146951]: 2025-10-08 16:27:42.444617144 +0000 UTC m=+0.048973371 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:27:46 compute-0 nova_compute[117413]: 2025-10-08 16:27:46.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:47 compute-0 nova_compute[117413]: 2025-10-08 16:27:47.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:47 compute-0 podman[146972]: 2025-10-08 16:27:47.453047115 +0000 UTC m=+0.062270773 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:27:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:49.984 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:0e:1e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7d8211aa56344219a4778e4641775b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c) old=Port_Binding(mac=['fa:16:3e:cb:0e:1e'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7d8211aa56344219a4778e4641775b2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:27:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:49.984 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea updated
Oct 08 16:27:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:49.985 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:27:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:27:49.986 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b566a04a-f452-489d-80ce-09b2d016fc1d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:27:51 compute-0 nova_compute[117413]: 2025-10-08 16:27:51.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:52 compute-0 nova_compute[117413]: 2025-10-08 16:27:52.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:53 compute-0 podman[146992]: 2025-10-08 16:27:53.439370801 +0000 UTC m=+0.049427394 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Oct 08 16:27:56 compute-0 nova_compute[117413]: 2025-10-08 16:27:56.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:56 compute-0 podman[147013]: 2025-10-08 16:27:56.693392026 +0000 UTC m=+0.046925271 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:27:57 compute-0 nova_compute[117413]: 2025-10-08 16:27:57.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:27:59 compute-0 podman[127881]: time="2025-10-08T16:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:27:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:27:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3024 "" "Go-http-client/1.1"
Oct 08 16:28:01 compute-0 nova_compute[117413]: 2025-10-08 16:28:01.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:01.408 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:73:2b 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-60917893-9548-4182-96aa-5c5f37a42452', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60917893-9548-4182-96aa-5c5f37a42452', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fc2d01b5-2ad5-42c6-8fd8-6dd64954512e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=43300ec3-a2cc-4721-a803-151667f2c676) old=Port_Binding(mac=['fa:16:3e:a6:73:2b'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-60917893-9548-4182-96aa-5c5f37a42452', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60917893-9548-4182-96aa-5c5f37a42452', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:28:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:01.408 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 43300ec3-a2cc-4721-a803-151667f2c676 in datapath 60917893-9548-4182-96aa-5c5f37a42452 updated
Oct 08 16:28:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:01.409 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60917893-9548-4182-96aa-5c5f37a42452, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:28:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:01.410 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[124043ea-8487-4f4b-93ae-b0406c5ecd17]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: ERROR   16:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: ERROR   16:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: ERROR   16:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: ERROR   16:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: ERROR   16:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:28:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:28:02 compute-0 nova_compute[117413]: 2025-10-08 16:28:02.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:02 compute-0 nova_compute[117413]: 2025-10-08 16:28:02.360 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:03 compute-0 podman[147033]: 2025-10-08 16:28:03.450629841 +0000 UTC m=+0.056127306 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:28:03 compute-0 podman[147034]: 2025-10-08 16:28:03.514824899 +0000 UTC m=+0.113356914 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 16:28:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:03.519 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:28:03 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:03.519 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:28:03 compute-0 nova_compute[117413]: 2025-10-08 16:28:03.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:04 compute-0 nova_compute[117413]: 2025-10-08 16:28:04.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:04.521 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.874 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:05 compute-0 nova_compute[117413]: 2025-10-08 16:28:05.874 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.025 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.026 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.047 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.047 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6174MB free_disk=73.25473403930664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.048 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.048 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:06 compute-0 nova_compute[117413]: 2025-10-08 16:28:06.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:07 compute-0 nova_compute[117413]: 2025-10-08 16:28:07.097 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:28:07 compute-0 nova_compute[117413]: 2025-10-08 16:28:07.097 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:28:06 up 36 min,  0 user,  load average: 0.28, 0.23, 0.25\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:28:07 compute-0 nova_compute[117413]: 2025-10-08 16:28:07.116 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:28:07 compute-0 nova_compute[117413]: 2025-10-08 16:28:07.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:07 compute-0 nova_compute[117413]: 2025-10-08 16:28:07.623 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:28:08 compute-0 nova_compute[117413]: 2025-10-08 16:28:08.133 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:28:08 compute-0 nova_compute[117413]: 2025-10-08 16:28:08.133 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.085s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:09 compute-0 nova_compute[117413]: 2025-10-08 16:28:09.134 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:09 compute-0 nova_compute[117413]: 2025-10-08 16:28:09.134 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:09 compute-0 nova_compute[117413]: 2025-10-08 16:28:09.134 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:28:10 compute-0 nova_compute[117413]: 2025-10-08 16:28:10.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:10 compute-0 ovn_controller[19768]: 2025-10-08T16:28:10Z|00134|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 08 16:28:11 compute-0 nova_compute[117413]: 2025-10-08 16:28:11.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:12 compute-0 nova_compute[117413]: 2025-10-08 16:28:12.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:13 compute-0 podman[147085]: 2025-10-08 16:28:13.446211779 +0000 UTC m=+0.052382618 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4)
Oct 08 16:28:15 compute-0 nova_compute[117413]: 2025-10-08 16:28:15.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:28:16 compute-0 nova_compute[117413]: 2025-10-08 16:28:16.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:17 compute-0 nova_compute[117413]: 2025-10-08 16:28:17.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:18 compute-0 podman[147106]: 2025-10-08 16:28:18.456748421 +0000 UTC m=+0.058816704 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 08 16:28:21 compute-0 nova_compute[117413]: 2025-10-08 16:28:21.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:22 compute-0 nova_compute[117413]: 2025-10-08 16:28:22.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:24 compute-0 podman[147127]: 2025-10-08 16:28:24.476023296 +0000 UTC m=+0.082234038 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:28:26 compute-0 nova_compute[117413]: 2025-10-08 16:28:26.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:27 compute-0 nova_compute[117413]: 2025-10-08 16:28:27.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:27 compute-0 podman[147147]: 2025-10-08 16:28:27.471950994 +0000 UTC m=+0.070167201 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:28:29 compute-0 podman[127881]: time="2025-10-08T16:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:28:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:28:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3028 "" "Go-http-client/1.1"
Oct 08 16:28:31 compute-0 nova_compute[117413]: 2025-10-08 16:28:31.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: ERROR   16:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: ERROR   16:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: ERROR   16:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: ERROR   16:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: ERROR   16:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:28:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:28:32 compute-0 nova_compute[117413]: 2025-10-08 16:28:32.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:34 compute-0 podman[147166]: 2025-10-08 16:28:34.466009774 +0000 UTC m=+0.067010349 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:28:34 compute-0 podman[147167]: 2025-10-08 16:28:34.477845625 +0000 UTC m=+0.086631774 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:28:35 compute-0 nova_compute[117413]: 2025-10-08 16:28:35.805 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:35 compute-0 nova_compute[117413]: 2025-10-08 16:28:35.805 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:36 compute-0 nova_compute[117413]: 2025-10-08 16:28:36.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:36 compute-0 nova_compute[117413]: 2025-10-08 16:28:36.311 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:28:36 compute-0 nova_compute[117413]: 2025-10-08 16:28:36.859 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:36 compute-0 nova_compute[117413]: 2025-10-08 16:28:36.859 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:36 compute-0 nova_compute[117413]: 2025-10-08 16:28:36.866 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:28:36 compute-0 nova_compute[117413]: 2025-10-08 16:28:36.867 2 INFO nova.compute.claims [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:28:37 compute-0 nova_compute[117413]: 2025-10-08 16:28:37.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:37 compute-0 nova_compute[117413]: 2025-10-08 16:28:37.916 2 DEBUG nova.compute.provider_tree [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:28:38 compute-0 nova_compute[117413]: 2025-10-08 16:28:38.427 2 DEBUG nova.scheduler.client.report [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:28:38 compute-0 nova_compute[117413]: 2025-10-08 16:28:38.939 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.080s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:38 compute-0 nova_compute[117413]: 2025-10-08 16:28:38.940 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:28:39 compute-0 nova_compute[117413]: 2025-10-08 16:28:39.451 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:28:39 compute-0 nova_compute[117413]: 2025-10-08 16:28:39.452 2 DEBUG nova.network.neutron [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:28:39 compute-0 nova_compute[117413]: 2025-10-08 16:28:39.452 2 WARNING neutronclient.v2_0.client [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:28:39 compute-0 nova_compute[117413]: 2025-10-08 16:28:39.452 2 WARNING neutronclient.v2_0.client [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:28:39 compute-0 nova_compute[117413]: 2025-10-08 16:28:39.961 2 INFO nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:28:40 compute-0 nova_compute[117413]: 2025-10-08 16:28:40.476 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.501 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.502 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.503 2 INFO nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Creating image(s)
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.503 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "/var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.503 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "/var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.504 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "/var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.505 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.508 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.510 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.565 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.566 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.567 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.567 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.571 2 DEBUG oslo_utils.imageutils.format_inspector [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.571 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.630 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.631 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.674 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.676 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.109s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.676 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.692 2 DEBUG nova.network.neutron [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Successfully created port: 385bee6e-74ee-4c23-9048-bca81208f18b _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.734 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.735 2 DEBUG nova.virt.disk.api [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Checking if we can resize image /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.735 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.789 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.790 2 DEBUG nova.virt.disk.api [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Cannot resize image /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.790 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.791 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Ensure instance console log exists: /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.791 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.792 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:41 compute-0 nova_compute[117413]: 2025-10-08 16:28:41.792 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:41.907 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:41.908 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:41.908 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.655 2 DEBUG nova.network.neutron [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Successfully updated port: 385bee6e-74ee-4c23-9048-bca81208f18b _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.699 2 DEBUG nova.compute.manager [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-changed-385bee6e-74ee-4c23-9048-bca81208f18b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.700 2 DEBUG nova.compute.manager [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Refreshing instance network info cache due to event network-changed-385bee6e-74ee-4c23-9048-bca81208f18b. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.700 2 DEBUG oslo_concurrency.lockutils [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-bff67805-35b2-4e36-9e58-3785c94133e4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.700 2 DEBUG oslo_concurrency.lockutils [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-bff67805-35b2-4e36-9e58-3785c94133e4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:28:42 compute-0 nova_compute[117413]: 2025-10-08 16:28:42.700 2 DEBUG nova.network.neutron [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Refreshing network info cache for port 385bee6e-74ee-4c23-9048-bca81208f18b _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:28:43 compute-0 nova_compute[117413]: 2025-10-08 16:28:43.161 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "refresh_cache-bff67805-35b2-4e36-9e58-3785c94133e4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:28:43 compute-0 nova_compute[117413]: 2025-10-08 16:28:43.205 2 WARNING neutronclient.v2_0.client [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:28:43 compute-0 nova_compute[117413]: 2025-10-08 16:28:43.983 2 DEBUG nova.network.neutron [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:28:44 compute-0 nova_compute[117413]: 2025-10-08 16:28:44.137 2 DEBUG nova.network.neutron [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:28:44 compute-0 podman[147233]: 2025-10-08 16:28:44.477018697 +0000 UTC m=+0.081421185 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:28:44 compute-0 nova_compute[117413]: 2025-10-08 16:28:44.645 2 DEBUG oslo_concurrency.lockutils [req-fdc26125-8adb-4121-8990-bd331f32c874 req-d1815368-8f81-4193-b333-faac1dfd6f86 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-bff67805-35b2-4e36-9e58-3785c94133e4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:28:44 compute-0 nova_compute[117413]: 2025-10-08 16:28:44.646 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquired lock "refresh_cache-bff67805-35b2-4e36-9e58-3785c94133e4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:28:44 compute-0 nova_compute[117413]: 2025-10-08 16:28:44.646 2 DEBUG nova.network.neutron [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:28:45 compute-0 nova_compute[117413]: 2025-10-08 16:28:45.979 2 DEBUG nova.network.neutron [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:28:46 compute-0 nova_compute[117413]: 2025-10-08 16:28:46.175 2 WARNING neutronclient.v2_0.client [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:28:46 compute-0 nova_compute[117413]: 2025-10-08 16:28:46.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:46 compute-0 nova_compute[117413]: 2025-10-08 16:28:46.563 2 DEBUG nova.network.neutron [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Updating instance_info_cache with network_info: [{"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.072 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Releasing lock "refresh_cache-bff67805-35b2-4e36-9e58-3785c94133e4" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.072 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Instance network_info: |[{"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.075 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Start _get_guest_xml network_info=[{"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.079 2 WARNING nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.081 2 DEBUG nova.virt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-912600583', uuid='bff67805-35b2-4e36-9e58-3785c94133e4'), owner=OwnerMeta(userid='a35a495eee564e31a6dce3a5c601665c', username='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin', projectid='621f620ded214ac792354cb32ce3de49', projectname='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759940927.081348) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.087 2 DEBUG nova.virt.libvirt.host [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.088 2 DEBUG nova.virt.libvirt.host [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.091 2 DEBUG nova.virt.libvirt.host [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.092 2 DEBUG nova.virt.libvirt.host [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.093 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.093 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.094 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.094 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.094 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.094 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.094 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.095 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.095 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.095 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.096 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.096 2 DEBUG nova.virt.hardware [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.100 2 DEBUG nova.virt.libvirt.vif [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-912600583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-912',id=17,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-2ncfz88w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_use
r_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:28:40Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=bff67805-35b2-4e36-9e58-3785c94133e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.101 2 DEBUG nova.network.os_vif_util [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.102 2 DEBUG nova.network.os_vif_util [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.103 2 DEBUG nova.objects.instance [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lazy-loading 'pci_devices' on Instance uuid bff67805-35b2-4e36-9e58-3785c94133e4 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.611 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <uuid>bff67805-35b2-4e36-9e58-3785c94133e4</uuid>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <name>instance-00000011</name>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteNodeResourceConsolidationStrategy-server-912600583</nova:name>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:28:47</nova:creationTime>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:28:47 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:28:47 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:user uuid="a35a495eee564e31a6dce3a5c601665c">tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin</nova:user>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:project uuid="621f620ded214ac792354cb32ce3de49">tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320</nova:project>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         <nova:port uuid="385bee6e-74ee-4c23-9048-bca81208f18b">
Oct 08 16:28:47 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <system>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <entry name="serial">bff67805-35b2-4e36-9e58-3785c94133e4</entry>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <entry name="uuid">bff67805-35b2-4e36-9e58-3785c94133e4</entry>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </system>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <os>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </os>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <features>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </features>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.config"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:8f:8f:2d"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <target dev="tap385bee6e-74"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/console.log" append="off"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <video>
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </video>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:28:47 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:28:47 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:28:47 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:28:47 compute-0 nova_compute[117413]: </domain>
Oct 08 16:28:47 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.612 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Preparing to wait for external event network-vif-plugged-385bee6e-74ee-4c23-9048-bca81208f18b prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.612 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.612 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.613 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.613 2 DEBUG nova.virt.libvirt.vif [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-912600583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-912',id=17,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-2ncfz88w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320'
,owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:28:40Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=bff67805-35b2-4e36-9e58-3785c94133e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.613 2 DEBUG nova.network.os_vif_util [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.614 2 DEBUG nova.network.os_vif_util [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.614 2 DEBUG os_vif [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'ae258525-d652-5f56-82de-a659de686584', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385bee6e-74, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap385bee6e-74, col_values=(('qos', UUID('338709d2-33f1-4732-9420-176321203fbf')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap385bee6e-74, col_values=(('external_ids', {'iface-id': '385bee6e-74ee-4c23-9048-bca81208f18b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:8f:2d', 'vm-uuid': 'bff67805-35b2-4e36-9e58-3785c94133e4'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 NetworkManager[1034]: <info>  [1759940927.6238] manager: (tap385bee6e-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:47 compute-0 nova_compute[117413]: 2025-10-08 16:28:47.628 2 INFO os_vif [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74')
Oct 08 16:28:49 compute-0 nova_compute[117413]: 2025-10-08 16:28:49.166 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:28:49 compute-0 nova_compute[117413]: 2025-10-08 16:28:49.167 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:28:49 compute-0 nova_compute[117413]: 2025-10-08 16:28:49.168 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] No VIF found with MAC fa:16:3e:8f:8f:2d, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:28:49 compute-0 nova_compute[117413]: 2025-10-08 16:28:49.168 2 INFO nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Using config drive
Oct 08 16:28:49 compute-0 podman[147254]: 2025-10-08 16:28:49.433605354 +0000 UTC m=+0.044590474 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 16:28:49 compute-0 nova_compute[117413]: 2025-10-08 16:28:49.679 2 WARNING neutronclient.v2_0.client [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.074 2 INFO nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Creating config drive at /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.config
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.079 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmprebzib00 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.204 2 DEBUG oslo_concurrency.processutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmprebzib00" returned: 0 in 0.125s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:28:50 compute-0 kernel: tap385bee6e-74: entered promiscuous mode
Oct 08 16:28:50 compute-0 NetworkManager[1034]: <info>  [1759940930.2553] manager: (tap385bee6e-74): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct 08 16:28:50 compute-0 ovn_controller[19768]: 2025-10-08T16:28:50Z|00135|binding|INFO|Claiming lport 385bee6e-74ee-4c23-9048-bca81208f18b for this chassis.
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 ovn_controller[19768]: 2025-10-08T16:28:50Z|00136|binding|INFO|385bee6e-74ee-4c23-9048-bca81208f18b: Claiming fa:16:3e:8f:8f:2d 10.100.0.11
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.277 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:8f:2d 10.100.0.11'], port_security=['fa:16:3e:8f:8f:2d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bff67805-35b2-4e36-9e58-3785c94133e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=385bee6e-74ee-4c23-9048-bca81208f18b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.278 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 385bee6e-74ee-4c23-9048-bca81208f18b in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea bound to our chassis
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.279 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:28:50 compute-0 systemd-udevd[147293]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.291 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fae80a80-b51b-4a02-bc0b-7360f153d81d]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.292 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeaa04398-51 in ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.295 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeaa04398-50 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.295 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0b310b0e-be43-4ab1-a2b0-c73fbc95a0e0]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.296 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa5ab97-014f-4d8b-83a2-3e326a0297a6]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 NetworkManager[1034]: <info>  [1759940930.3037] device (tap385bee6e-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:28:50 compute-0 NetworkManager[1034]: <info>  [1759940930.3050] device (tap385bee6e-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.306 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1e0b24-41f8-4728-b2ab-5545f894666b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 systemd-machined[77548]: New machine qemu-12-instance-00000011.
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 ovn_controller[19768]: 2025-10-08T16:28:50Z|00137|binding|INFO|Setting lport 385bee6e-74ee-4c23-9048-bca81208f18b ovn-installed in OVS
Oct 08 16:28:50 compute-0 ovn_controller[19768]: 2025-10-08T16:28:50Z|00138|binding|INFO|Setting lport 385bee6e-74ee-4c23-9048-bca81208f18b up in Southbound
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.326 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a452b4-e816-45d5-8f3a-62675cfb0256]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000011.
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.362 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[358c2c7d-476f-4eb6-bc73-4f6ddc8fed70]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.366 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c496dd-43ef-4417-a427-a1f59a367132]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 NetworkManager[1034]: <info>  [1759940930.3677] manager: (tapeaa04398-50): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.402 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[cb0ff3e7-a654-457c-9bb6-64849795dcbe]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.405 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[dc410e9b-f3d2-4536-9537-a6e14b8f4bc6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 NetworkManager[1034]: <info>  [1759940930.4297] device (tapeaa04398-50): carrier: link connected
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.436 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e3494b-d819-41f0-8575-0a7a3033641d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.452 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d3bd87-5719-4b52-898c-8399dc2b09cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 221912, 'reachable_time': 35480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147326, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.470 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d471e511-b264-4065-8fc8-b108e65da294]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:e1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 221912, 'tstamp': 221912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147327, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.487 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0ee50e-5bf7-4ec0-9fd5-80f49a409beb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 221912, 'reachable_time': 35480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 147329, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.523 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8c6bc9-97be-42fa-8c2a-745993f3984d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.591 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[03daee55-962b-434d-a5e7-1dbb6f1c8d9a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.593 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.593 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.594 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa04398-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 kernel: tapeaa04398-50: entered promiscuous mode
Oct 08 16:28:50 compute-0 NetworkManager[1034]: <info>  [1759940930.5985] manager: (tapeaa04398-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.600 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeaa04398-50, col_values=(('external_ids', {'iface-id': '4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:28:50 compute-0 ovn_controller[19768]: 2025-10-08T16:28:50Z|00139|binding|INFO|Releasing lport 4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c from this chassis (sb_readonly=0)
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.622 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b04313-cd5e-49c3-85e4-b363cddb727e]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.623 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.623 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.623 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for eaa04398-576e-4a18-a2fe-a6a0b2d52eea disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.623 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.624 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7477c6-3fbf-4815-8e98-0905b660a156]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.624 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.625 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cb61f04e-19e8-4a96-8a72-b0466d7041e5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.625 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:28:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:28:50.626 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'env', 'PROCESS_TAG=haproxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.958 2 DEBUG nova.compute.manager [req-b05f8c75-dc89-4448-8616-49b77f87b8e0 req-5383eafb-948e-4a0a-9a4a-8f0fe584c6ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-plugged-385bee6e-74ee-4c23-9048-bca81208f18b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.959 2 DEBUG oslo_concurrency.lockutils [req-b05f8c75-dc89-4448-8616-49b77f87b8e0 req-5383eafb-948e-4a0a-9a4a-8f0fe584c6ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.959 2 DEBUG oslo_concurrency.lockutils [req-b05f8c75-dc89-4448-8616-49b77f87b8e0 req-5383eafb-948e-4a0a-9a4a-8f0fe584c6ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.959 2 DEBUG oslo_concurrency.lockutils [req-b05f8c75-dc89-4448-8616-49b77f87b8e0 req-5383eafb-948e-4a0a-9a4a-8f0fe584c6ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.959 2 DEBUG nova.compute.manager [req-b05f8c75-dc89-4448-8616-49b77f87b8e0 req-5383eafb-948e-4a0a-9a4a-8f0fe584c6ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Processing event network-vif-plugged-385bee6e-74ee-4c23-9048-bca81208f18b _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.960 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.964 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.967 2 INFO nova.virt.libvirt.driver [-] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Instance spawned successfully.
Oct 08 16:28:50 compute-0 nova_compute[117413]: 2025-10-08 16:28:50.967 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:28:51 compute-0 podman[147367]: 2025-10-08 16:28:51.010323995 +0000 UTC m=+0.054848890 container create 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:28:51 compute-0 systemd[1]: Started libpod-conmon-5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e.scope.
Oct 08 16:28:51 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdea16b06e802354792209a7595b1265593301b2c0a34411b2fb4d9554823bcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:28:51 compute-0 podman[147367]: 2025-10-08 16:28:50.982978178 +0000 UTC m=+0.027503103 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:28:51 compute-0 podman[147367]: 2025-10-08 16:28:51.088442593 +0000 UTC m=+0.132967508 container init 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:28:51 compute-0 podman[147367]: 2025-10-08 16:28:51.093773937 +0000 UTC m=+0.138298832 container start 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:28:51 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [NOTICE]   (147386) : New worker (147388) forked
Oct 08 16:28:51 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [NOTICE]   (147386) : Loading success.
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.479 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.480 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.481 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.481 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.482 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.482 2 DEBUG nova.virt.libvirt.driver [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.991 2 INFO nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Took 10.49 seconds to spawn the instance on the hypervisor.
Oct 08 16:28:51 compute-0 nova_compute[117413]: 2025-10-08 16:28:51.992 2 DEBUG nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:28:52 compute-0 nova_compute[117413]: 2025-10-08 16:28:52.517 2 INFO nova.compute.manager [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Took 15.69 seconds to build instance.
Oct 08 16:28:52 compute-0 nova_compute[117413]: 2025-10-08 16:28:52.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.018 2 DEBUG nova.compute.manager [req-84f7e72a-b4bc-4bf6-9196-c1d45473a6e5 req-41252b76-8fdb-4273-8289-af011d8b6a6e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-plugged-385bee6e-74ee-4c23-9048-bca81208f18b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.019 2 DEBUG oslo_concurrency.lockutils [req-84f7e72a-b4bc-4bf6-9196-c1d45473a6e5 req-41252b76-8fdb-4273-8289-af011d8b6a6e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.019 2 DEBUG oslo_concurrency.lockutils [req-84f7e72a-b4bc-4bf6-9196-c1d45473a6e5 req-41252b76-8fdb-4273-8289-af011d8b6a6e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.019 2 DEBUG oslo_concurrency.lockutils [req-84f7e72a-b4bc-4bf6-9196-c1d45473a6e5 req-41252b76-8fdb-4273-8289-af011d8b6a6e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.019 2 DEBUG nova.compute.manager [req-84f7e72a-b4bc-4bf6-9196-c1d45473a6e5 req-41252b76-8fdb-4273-8289-af011d8b6a6e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] No waiting events found dispatching network-vif-plugged-385bee6e-74ee-4c23-9048-bca81208f18b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.019 2 WARNING nova.compute.manager [req-84f7e72a-b4bc-4bf6-9196-c1d45473a6e5 req-41252b76-8fdb-4273-8289-af011d8b6a6e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received unexpected event network-vif-plugged-385bee6e-74ee-4c23-9048-bca81208f18b for instance with vm_state active and task_state None.
Oct 08 16:28:53 compute-0 nova_compute[117413]: 2025-10-08 16:28:53.021 2 DEBUG oslo_concurrency.lockutils [None req-8f6ccbb5-e48e-4701-9e1d-6f08aa45dbca a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.216s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:28:55 compute-0 podman[147397]: 2025-10-08 16:28:55.443003555 +0000 UTC m=+0.051365789 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 08 16:28:56 compute-0 nova_compute[117413]: 2025-10-08 16:28:56.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:57 compute-0 nova_compute[117413]: 2025-10-08 16:28:57.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:28:58 compute-0 podman[147417]: 2025-10-08 16:28:58.443911016 +0000 UTC m=+0.051102172 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, 
org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 08 16:28:59 compute-0 podman[127881]: time="2025-10-08T16:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:28:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:28:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3486 "" "Go-http-client/1.1"
Oct 08 16:29:01 compute-0 nova_compute[117413]: 2025-10-08 16:29:01.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: ERROR   16:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: ERROR   16:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: ERROR   16:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: ERROR   16:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: ERROR   16:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:29:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:29:02 compute-0 nova_compute[117413]: 2025-10-08 16:29:02.594 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Creating tmpfile /var/lib/nova/instances/tmp7oedqra3 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:29:02 compute-0 nova_compute[117413]: 2025-10-08 16:29:02.596 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:02 compute-0 nova_compute[117413]: 2025-10-08 16:29:02.607 2 DEBUG nova.compute.manager [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7oedqra3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:29:02 compute-0 nova_compute[117413]: 2025-10-08 16:29:02.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:02 compute-0 nova_compute[117413]: 2025-10-08 16:29:02.866 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:03 compute-0 ovn_controller[19768]: 2025-10-08T16:29:03Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:8f:2d 10.100.0.11
Oct 08 16:29:03 compute-0 ovn_controller[19768]: 2025-10-08T16:29:03Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:8f:2d 10.100.0.11
Oct 08 16:29:04 compute-0 nova_compute[117413]: 2025-10-08 16:29:04.647 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 16:29:05 compute-0 podman[147449]: 2025-10-08 16:29:05.174794809 +0000 UTC m=+0.067113702 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:29:05 compute-0 podman[147450]: 2025-10-08 16:29:05.197562355 +0000 UTC m=+0.086525222 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:29:06 compute-0 nova_compute[117413]: 2025-10-08 16:29:06.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:06 compute-0 nova_compute[117413]: 2025-10-08 16:29:06.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.872 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:07 compute-0 nova_compute[117413]: 2025-10-08 16:29:07.873 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:29:08 compute-0 nova_compute[117413]: 2025-10-08 16:29:08.908 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:08 compute-0 nova_compute[117413]: 2025-10-08 16:29:08.943 2 DEBUG nova.compute.manager [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7oedqra3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='486c5fd5-76da-48a7-9a11-e404ccb4cfba',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:29:08 compute-0 nova_compute[117413]: 2025-10-08 16:29:08.983 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:08 compute-0 nova_compute[117413]: 2025-10-08 16:29:08.983 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.075 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.249 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.250 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.293 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.293 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5964MB free_disk=73.22172164916992GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.294 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.294 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.961 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-486c5fd5-76da-48a7-9a11-e404ccb4cfba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.961 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-486c5fd5-76da-48a7-9a11-e404ccb4cfba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:29:09 compute-0 nova_compute[117413]: 2025-10-08 16:29:09.962 2 DEBUG nova.network.neutron [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:29:10 compute-0 nova_compute[117413]: 2025-10-08 16:29:10.315 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Migration for instance 486c5fd5-76da-48a7-9a11-e404ccb4cfba refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:29:10 compute-0 nova_compute[117413]: 2025-10-08 16:29:10.469 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:10 compute-0 nova_compute[117413]: 2025-10-08 16:29:10.823 2 INFO nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Updating resource usage from migration e76ed34f-7157-4cc0-ba23-f2cfc5e59bde
Oct 08 16:29:10 compute-0 nova_compute[117413]: 2025-10-08 16:29:10.824 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Starting to track incoming migration e76ed34f-7157-4cc0-ba23-f2cfc5e59bde with flavor 43cd5d45-bd07-4889-a671-dd23291090c1 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.189 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.340 2 DEBUG nova.network.neutron [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Updating instance_info_cache with network_info: [{"id": "d4440888-e536-4b06-ba7d-515f11bd5f93", "address": "fa:16:3e:31:3a:8e", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4440888-e5", "ovs_interfaceid": "d4440888-e536-4b06-ba7d-515f11bd5f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.349 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance bff67805-35b2-4e36-9e58-3785c94133e4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.845 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-486c5fd5-76da-48a7-9a11-e404ccb4cfba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.855 2 WARNING nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 486c5fd5-76da-48a7-9a11-e404ccb4cfba has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.855 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.855 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:29:09 up 37 min,  0 user,  load average: 0.45, 0.27, 0.27\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_621f620ded214ac792354cb32ce3de49': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.860 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7oedqra3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='486c5fd5-76da-48a7-9a11-e404ccb4cfba',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.860 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Creating instance directory: /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.861 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Creating disk.info with the contents: {'/var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk': 'qcow2', '/var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.861 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.862 2 DEBUG nova.objects.instance [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 486c5fd5-76da-48a7-9a11-e404ccb4cfba obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:29:11 compute-0 nova_compute[117413]: 2025-10-08 16:29:11.910 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.369 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.372 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.374 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.418 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.439 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.440 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.441 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.441 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.444 2 DEBUG oslo_utils.imageutils.format_inspector [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.445 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.501 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.502 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.537 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.538 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.097s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.539 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.591 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.592 2 DEBUG nova.virt.disk.api [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.592 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.648 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.649 2 DEBUG nova.virt.disk.api [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.649 2 DEBUG nova.objects.instance [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 486c5fd5-76da-48a7-9a11-e404ccb4cfba obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.928 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.929 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.635s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.929 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:12 compute-0 nova_compute[117413]: 2025-10-08 16:29:12.929 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.156 2 DEBUG nova.objects.base [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<486c5fd5-76da-48a7-9a11-e404ccb4cfba> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.157 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.179 2 DEBUG oslo_concurrency.processutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba/disk.config 497664" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.180 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.182 2 DEBUG nova.virt.libvirt.vif [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1479976031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-147',id=16,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:28:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-4i7zhug5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:28:28Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=486c5fd5-76da-48a7-9a11-e404ccb4cfba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4440888-e536-4b06-ba7d-515f11bd5f93", "address": "fa:16:3e:31:3a:8e", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd4440888-e5", "ovs_interfaceid": "d4440888-e536-4b06-ba7d-515f11bd5f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.182 2 DEBUG nova.network.os_vif_util [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "d4440888-e536-4b06-ba7d-515f11bd5f93", "address": "fa:16:3e:31:3a:8e", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd4440888-e5", "ovs_interfaceid": "d4440888-e536-4b06-ba7d-515f11bd5f93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.183 2 DEBUG nova.network.os_vif_util [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:31:3a:8e,bridge_name='br-int',has_traffic_filtering=True,id=d4440888-e536-4b06-ba7d-515f11bd5f93,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4440888-e5') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.183 2 DEBUG os_vif [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:3a:8e,bridge_name='br-int',has_traffic_filtering=True,id=d4440888-e536-4b06-ba7d-515f11bd5f93,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4440888-e5') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'fb3c652d-7622-54c8-84be-eb9821c331f8', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4440888-e5, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapd4440888-e5, col_values=(('qos', UUID('491a690d-1f7a-4496-92af-831efe43be2b')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapd4440888-e5, col_values=(('external_ids', {'iface-id': 'd4440888-e536-4b06-ba7d-515f11bd5f93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:31:3a:8e', 'vm-uuid': '486c5fd5-76da-48a7-9a11-e404ccb4cfba'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:13 compute-0 NetworkManager[1034]: <info>  [1759940953.1943] manager: (tapd4440888-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.204 2 INFO os_vif [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:31:3a:8e,bridge_name='br-int',has_traffic_filtering=True,id=d4440888-e536-4b06-ba7d-515f11bd5f93,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4440888-e5')
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.204 2 DEBUG nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.205 2 DEBUG nova.compute.manager [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7oedqra3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='486c5fd5-76da-48a7-9a11-e404ccb4cfba',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.206 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:13 compute-0 nova_compute[117413]: 2025-10-08 16:29:13.992 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:15.007 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:29:15 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:15.007 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:29:15 compute-0 nova_compute[117413]: 2025-10-08 16:29:15.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:15 compute-0 podman[147528]: 2025-10-08 16:29:15.441480062 +0000 UTC m=+0.053442389 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 08 16:29:15 compute-0 nova_compute[117413]: 2025-10-08 16:29:15.869 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:15 compute-0 nova_compute[117413]: 2025-10-08 16:29:15.870 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:15 compute-0 nova_compute[117413]: 2025-10-08 16:29:15.870 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:29:16 compute-0 nova_compute[117413]: 2025-10-08 16:29:16.094 2 DEBUG nova.network.neutron [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Port d4440888-e536-4b06-ba7d-515f11bd5f93 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:29:16 compute-0 nova_compute[117413]: 2025-10-08 16:29:16.105 2 DEBUG nova.compute.manager [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp7oedqra3',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='486c5fd5-76da-48a7-9a11-e404ccb4cfba',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:29:16 compute-0 nova_compute[117413]: 2025-10-08 16:29:16.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:16 compute-0 nova_compute[117413]: 2025-10-08 16:29:16.378 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.008 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:29:18 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 16:29:18 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 16:29:18 compute-0 kernel: tapd4440888-e5: entered promiscuous mode
Oct 08 16:29:18 compute-0 NetworkManager[1034]: <info>  [1759940958.6522] manager: (tapd4440888-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 08 16:29:18 compute-0 ovn_controller[19768]: 2025-10-08T16:29:18Z|00140|binding|INFO|Claiming lport d4440888-e536-4b06-ba7d-515f11bd5f93 for this additional chassis.
Oct 08 16:29:18 compute-0 ovn_controller[19768]: 2025-10-08T16:29:18Z|00141|binding|INFO|d4440888-e536-4b06-ba7d-515f11bd5f93: Claiming fa:16:3e:31:3a:8e 10.100.0.4
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.661 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:3a:8e 10.100.0.4'], port_security=['fa:16:3e:31:3a:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '486c5fd5-76da-48a7-9a11-e404ccb4cfba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '10', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=d4440888-e536-4b06-ba7d-515f11bd5f93) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.662 28633 INFO neutron.agent.ovn.metadata.agent [-] Port d4440888-e536-4b06-ba7d-515f11bd5f93 in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea unbound from our chassis
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.663 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:29:18 compute-0 ovn_controller[19768]: 2025-10-08T16:29:18Z|00142|binding|INFO|Setting lport d4440888-e536-4b06-ba7d-515f11bd5f93 ovn-installed in OVS
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.678 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[edb601c1-50bb-4180-b557-a771be2e665e]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:18 compute-0 systemd-udevd[147583]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:29:18 compute-0 systemd-machined[77548]: New machine qemu-13-instance-00000010.
Oct 08 16:29:18 compute-0 NetworkManager[1034]: <info>  [1759940958.7086] device (tapd4440888-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:29:18 compute-0 NetworkManager[1034]: <info>  [1759940958.7093] device (tapd4440888-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:29:18 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000010.
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.724 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4c2ed1-a791-4768-99fd-430c43aabc1e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.727 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6a4a9e-e30c-4fc4-93bd-8ddcf11043e2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.778 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[6d804709-0b89-4d88-89f0-7774c973a231]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.800 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[58052855-440f-4311-8790-b3828d9c642d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 221912, 'reachable_time': 35480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147595, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.823 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[83684eea-6381-44f8-9808-8d6a3f79bb5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 221924, 'tstamp': 221924}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147596, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 221927, 'tstamp': 221927}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147596, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.824 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 nova_compute[117413]: 2025-10-08 16:29:18.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.828 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa04398-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.828 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.829 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeaa04398-50, col_values=(('external_ids', {'iface-id': '4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.829 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:29:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:18.831 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3279b7-8d7f-4a6c-b189-bd14b5ec7cb3]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:20 compute-0 podman[147612]: 2025-10-08 16:29:20.479917406 +0000 UTC m=+0.077095950 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 08 16:29:21 compute-0 nova_compute[117413]: 2025-10-08 16:29:21.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:22 compute-0 ovn_controller[19768]: 2025-10-08T16:29:22Z|00143|binding|INFO|Claiming lport d4440888-e536-4b06-ba7d-515f11bd5f93 for this chassis.
Oct 08 16:29:22 compute-0 ovn_controller[19768]: 2025-10-08T16:29:22Z|00144|binding|INFO|d4440888-e536-4b06-ba7d-515f11bd5f93: Claiming fa:16:3e:31:3a:8e 10.100.0.4
Oct 08 16:29:22 compute-0 ovn_controller[19768]: 2025-10-08T16:29:22Z|00145|binding|INFO|Setting lport d4440888-e536-4b06-ba7d-515f11bd5f93 up in Southbound
Oct 08 16:29:23 compute-0 nova_compute[117413]: 2025-10-08 16:29:23.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:24 compute-0 nova_compute[117413]: 2025-10-08 16:29:24.072 2 INFO nova.compute.manager [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Post operation of migration started
Oct 08 16:29:24 compute-0 nova_compute[117413]: 2025-10-08 16:29:24.073 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:25 compute-0 nova_compute[117413]: 2025-10-08 16:29:25.014 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:25 compute-0 nova_compute[117413]: 2025-10-08 16:29:25.014 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:25 compute-0 nova_compute[117413]: 2025-10-08 16:29:25.111 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-486c5fd5-76da-48a7-9a11-e404ccb4cfba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:29:25 compute-0 nova_compute[117413]: 2025-10-08 16:29:25.111 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-486c5fd5-76da-48a7-9a11-e404ccb4cfba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:29:25 compute-0 nova_compute[117413]: 2025-10-08 16:29:25.112 2 DEBUG nova.network.neutron [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:29:25 compute-0 nova_compute[117413]: 2025-10-08 16:29:25.618 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:26 compute-0 nova_compute[117413]: 2025-10-08 16:29:26.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:26 compute-0 nova_compute[117413]: 2025-10-08 16:29:26.277 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:26 compute-0 nova_compute[117413]: 2025-10-08 16:29:26.420 2 DEBUG nova.network.neutron [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Updating instance_info_cache with network_info: [{"id": "d4440888-e536-4b06-ba7d-515f11bd5f93", "address": "fa:16:3e:31:3a:8e", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4440888-e5", "ovs_interfaceid": "d4440888-e536-4b06-ba7d-515f11bd5f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:29:26 compute-0 podman[147638]: 2025-10-08 16:29:26.470972058 +0000 UTC m=+0.079515559 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:29:26 compute-0 nova_compute[117413]: 2025-10-08 16:29:26.927 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-486c5fd5-76da-48a7-9a11-e404ccb4cfba" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:29:27 compute-0 nova_compute[117413]: 2025-10-08 16:29:27.448 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:27 compute-0 nova_compute[117413]: 2025-10-08 16:29:27.449 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:27 compute-0 nova_compute[117413]: 2025-10-08 16:29:27.449 2 DEBUG oslo_concurrency.lockutils [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:27 compute-0 nova_compute[117413]: 2025-10-08 16:29:27.456 2 INFO nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:29:27 compute-0 virtqemud[117740]: Domain id=13 name='instance-00000010' uuid=486c5fd5-76da-48a7-9a11-e404ccb4cfba is tainted: custom-monitor
Oct 08 16:29:28 compute-0 nova_compute[117413]: 2025-10-08 16:29:28.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:28 compute-0 nova_compute[117413]: 2025-10-08 16:29:28.465 2 INFO nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:29:29 compute-0 podman[147658]: 2025-10-08 16:29:29.442094482 +0000 UTC m=+0.049920908 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct 08 16:29:29 compute-0 nova_compute[117413]: 2025-10-08 16:29:29.471 2 INFO nova.virt.libvirt.driver [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:29:29 compute-0 nova_compute[117413]: 2025-10-08 16:29:29.476 2 DEBUG nova.compute.manager [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:29:29 compute-0 podman[127881]: time="2025-10-08T16:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:29:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:29:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3493 "" "Go-http-client/1.1"
Oct 08 16:29:29 compute-0 nova_compute[117413]: 2025-10-08 16:29:29.985 2 DEBUG nova.objects.instance [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:29:31 compute-0 nova_compute[117413]: 2025-10-08 16:29:31.004 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:31 compute-0 nova_compute[117413]: 2025-10-08 16:29:31.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: ERROR   16:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: ERROR   16:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: ERROR   16:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: ERROR   16:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: ERROR   16:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:29:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:29:31 compute-0 nova_compute[117413]: 2025-10-08 16:29:31.662 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:31 compute-0 nova_compute[117413]: 2025-10-08 16:29:31.664 2 WARNING neutronclient.v2_0.client [None req-d8844381-d934-44a9-888e-7f8aba1a9781 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:33 compute-0 nova_compute[117413]: 2025-10-08 16:29:33.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:35 compute-0 podman[147679]: 2025-10-08 16:29:35.47301434 +0000 UTC m=+0.073562328 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:29:35 compute-0 podman[147680]: 2025-10-08 16:29:35.509711517 +0000 UTC m=+0.106016573 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.000 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.001 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.001 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.001 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.002 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.021 2 INFO nova.compute.manager [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Terminating instance
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.537 2 DEBUG nova.compute.manager [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:29:36 compute-0 kernel: tap385bee6e-74 (unregistering): left promiscuous mode
Oct 08 16:29:36 compute-0 NetworkManager[1034]: <info>  [1759940976.5690] device (tap385bee6e-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:29:36 compute-0 ovn_controller[19768]: 2025-10-08T16:29:36Z|00146|binding|INFO|Releasing lport 385bee6e-74ee-4c23-9048-bca81208f18b from this chassis (sb_readonly=0)
Oct 08 16:29:36 compute-0 ovn_controller[19768]: 2025-10-08T16:29:36Z|00147|binding|INFO|Setting lport 385bee6e-74ee-4c23-9048-bca81208f18b down in Southbound
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:36 compute-0 ovn_controller[19768]: 2025-10-08T16:29:36Z|00148|binding|INFO|Removing iface tap385bee6e-74 ovn-installed in OVS
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.590 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:8f:2d 10.100.0.11'], port_security=['fa:16:3e:8f:8f:2d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'bff67805-35b2-4e36-9e58-3785c94133e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '5', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=385bee6e-74ee-4c23-9048-bca81208f18b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.591 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 385bee6e-74ee-4c23-9048-bca81208f18b in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea unbound from our chassis
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.593 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.618 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6523b8-683d-4b2d-96b4-0d495e341d9c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct 08 16:29:36 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000011.scope: Consumed 12.988s CPU time.
Oct 08 16:29:36 compute-0 systemd-machined[77548]: Machine qemu-12-instance-00000011 terminated.
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.673 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3db4f8da-b70f-4d20-84d1-004a7ebefe83]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.676 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3f78cf2c-fda3-4e62-ab2a-71abec04d9da]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.728 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[80dc5d70-ea9d-4300-90f6-33f79350bebf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.755 2 DEBUG nova.compute.manager [req-b68b8d11-8bb0-4216-abf8-00f89acd2e2e req-27981ca2-b790-4f5d-ac9e-ed64598d95fc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-unplugged-385bee6e-74ee-4c23-9048-bca81208f18b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.755 2 DEBUG oslo_concurrency.lockutils [req-b68b8d11-8bb0-4216-abf8-00f89acd2e2e req-27981ca2-b790-4f5d-ac9e-ed64598d95fc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.756 2 DEBUG oslo_concurrency.lockutils [req-b68b8d11-8bb0-4216-abf8-00f89acd2e2e req-27981ca2-b790-4f5d-ac9e-ed64598d95fc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.754 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[702fedf5-3172-497a-a7f7-a5559b666d35]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 221912, 'reachable_time': 35480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147739, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.756 2 DEBUG oslo_concurrency.lockutils [req-b68b8d11-8bb0-4216-abf8-00f89acd2e2e req-27981ca2-b790-4f5d-ac9e-ed64598d95fc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.756 2 DEBUG nova.compute.manager [req-b68b8d11-8bb0-4216-abf8-00f89acd2e2e req-27981ca2-b790-4f5d-ac9e-ed64598d95fc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] No waiting events found dispatching network-vif-unplugged-385bee6e-74ee-4c23-9048-bca81208f18b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.756 2 DEBUG nova.compute.manager [req-b68b8d11-8bb0-4216-abf8-00f89acd2e2e req-27981ca2-b790-4f5d-ac9e-ed64598d95fc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-unplugged-385bee6e-74ee-4c23-9048-bca81208f18b for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.780 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[06d9dec0-b7ea-401d-8298-98c050469cf6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 221924, 'tstamp': 221924}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147743, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 221927, 'tstamp': 221927}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 147743, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.783 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.793 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa04398-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.793 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.794 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeaa04398-50, col_values=(('external_ids', {'iface-id': '4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.794 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:29:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:36.795 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[28ab56b8-4edf-4780-a621-97cdf4a51bf8]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.835 2 INFO nova.virt.libvirt.driver [-] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Instance destroyed successfully.
Oct 08 16:29:36 compute-0 nova_compute[117413]: 2025-10-08 16:29:36.836 2 DEBUG nova.objects.instance [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lazy-loading 'resources' on Instance uuid bff67805-35b2-4e36-9e58-3785c94133e4 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.342 2 DEBUG nova.virt.libvirt.vif [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:28:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-912600583',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-912',id=17,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-2ncfz88w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:28:52Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=bff67805-35b2-4e36-9e58-3785c94133e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.342 2 DEBUG nova.network.os_vif_util [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "385bee6e-74ee-4c23-9048-bca81208f18b", "address": "fa:16:3e:8f:8f:2d", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap385bee6e-74", "ovs_interfaceid": "385bee6e-74ee-4c23-9048-bca81208f18b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.343 2 DEBUG nova.network.os_vif_util [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.343 2 DEBUG os_vif [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385bee6e-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=338709d2-33f1-4732-9420-176321203fbf) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.357 2 INFO os_vif [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:8f:2d,bridge_name='br-int',has_traffic_filtering=True,id=385bee6e-74ee-4c23-9048-bca81208f18b,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap385bee6e-74')
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.357 2 INFO nova.virt.libvirt.driver [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Deleting instance files /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4_del
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.358 2 INFO nova.virt.libvirt.driver [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Deletion of /var/lib/nova/instances/bff67805-35b2-4e36-9e58-3785c94133e4_del complete
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.871 2 INFO nova.compute.manager [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.871 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.872 2 DEBUG nova.compute.manager [-] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.872 2 DEBUG nova.network.neutron [-] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.872 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:37 compute-0 nova_compute[117413]: 2025-10-08 16:29:37.994 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.728 2 DEBUG nova.network.neutron [-] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.801 2 DEBUG nova.compute.manager [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-unplugged-385bee6e-74ee-4c23-9048-bca81208f18b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.801 2 DEBUG oslo_concurrency.lockutils [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.802 2 DEBUG oslo_concurrency.lockutils [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.802 2 DEBUG oslo_concurrency.lockutils [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.802 2 DEBUG nova.compute.manager [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] No waiting events found dispatching network-vif-unplugged-385bee6e-74ee-4c23-9048-bca81208f18b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.802 2 DEBUG nova.compute.manager [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-unplugged-385bee6e-74ee-4c23-9048-bca81208f18b for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:29:38 compute-0 nova_compute[117413]: 2025-10-08 16:29:38.802 2 DEBUG nova.compute.manager [req-d7e605c7-152a-43bc-b816-c316a7dde2c5 req-2c7f389b-1387-43cd-8120-c43a3200adb9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Received event network-vif-deleted-385bee6e-74ee-4c23-9048-bca81208f18b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:29:39 compute-0 nova_compute[117413]: 2025-10-08 16:29:39.235 2 INFO nova.compute.manager [-] [instance: bff67805-35b2-4e36-9e58-3785c94133e4] Took 1.36 seconds to deallocate network for instance.
Oct 08 16:29:39 compute-0 nova_compute[117413]: 2025-10-08 16:29:39.761 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:39 compute-0 nova_compute[117413]: 2025-10-08 16:29:39.762 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:39 compute-0 nova_compute[117413]: 2025-10-08 16:29:39.839 2 DEBUG nova.compute.provider_tree [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:29:40 compute-0 nova_compute[117413]: 2025-10-08 16:29:40.348 2 DEBUG nova.scheduler.client.report [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:29:40 compute-0 nova_compute[117413]: 2025-10-08 16:29:40.862 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.100s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:40 compute-0 nova_compute[117413]: 2025-10-08 16:29:40.883 2 INFO nova.scheduler.client.report [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Deleted allocations for instance bff67805-35b2-4e36-9e58-3785c94133e4
Oct 08 16:29:41 compute-0 nova_compute[117413]: 2025-10-08 16:29:41.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:41.910 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:41 compute-0 nova_compute[117413]: 2025-10-08 16:29:41.911 2 DEBUG oslo_concurrency.lockutils [None req-6c7ea73a-0135-4d8c-88a3-4d35a31f827a a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "bff67805-35b2-4e36-9e58-3785c94133e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.910s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:41.911 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:41.912 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:42 compute-0 nova_compute[117413]: 2025-10-08 16:29:42.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.370 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.371 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.372 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.372 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.373 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.388 2 INFO nova.compute.manager [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Terminating instance
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.911 2 DEBUG nova.compute.manager [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:29:43 compute-0 kernel: tapd4440888-e5 (unregistering): left promiscuous mode
Oct 08 16:29:43 compute-0 NetworkManager[1034]: <info>  [1759940983.9403] device (tapd4440888-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:29:43 compute-0 ovn_controller[19768]: 2025-10-08T16:29:43Z|00149|binding|INFO|Releasing lport d4440888-e536-4b06-ba7d-515f11bd5f93 from this chassis (sb_readonly=0)
Oct 08 16:29:43 compute-0 ovn_controller[19768]: 2025-10-08T16:29:43Z|00150|binding|INFO|Setting lport d4440888-e536-4b06-ba7d-515f11bd5f93 down in Southbound
Oct 08 16:29:43 compute-0 ovn_controller[19768]: 2025-10-08T16:29:43Z|00151|binding|INFO|Removing iface tapd4440888-e5 ovn-installed in OVS
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:43.957 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:3a:8e 10.100.0.4'], port_security=['fa:16:3e:31:3a:8e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '486c5fd5-76da-48a7-9a11-e404ccb4cfba', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '15', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=d4440888-e536-4b06-ba7d-515f11bd5f93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:29:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:43.959 28633 INFO neutron.agent.ovn.metadata.agent [-] Port d4440888-e536-4b06-ba7d-515f11bd5f93 in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea unbound from our chassis
Oct 08 16:29:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:43.960 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:29:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:43.961 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d269bc07-869b-47eb-b934-8298ef61253c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:43.961 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea namespace which is not needed anymore
Oct 08 16:29:43 compute-0 nova_compute[117413]: 2025-10-08 16:29:43.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 08 16:29:44 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000010.scope: Consumed 3.016s CPU time.
Oct 08 16:29:44 compute-0 systemd-machined[77548]: Machine qemu-13-instance-00000010 terminated.
Oct 08 16:29:44 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [NOTICE]   (147386) : haproxy version is 3.0.5-8e879a5
Oct 08 16:29:44 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [NOTICE]   (147386) : path to executable is /usr/sbin/haproxy
Oct 08 16:29:44 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [WARNING]  (147386) : Exiting Master process...
Oct 08 16:29:44 compute-0 podman[147785]: 2025-10-08 16:29:44.11946809 +0000 UTC m=+0.036925504 container kill 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 08 16:29:44 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [ALERT]    (147386) : Current worker (147388) exited with code 143 (Terminated)
Oct 08 16:29:44 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[147382]: [WARNING]  (147386) : All workers exited. Exiting... (0)
Oct 08 16:29:44 compute-0 systemd[1]: libpod-5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e.scope: Deactivated successfully.
Oct 08 16:29:44 compute-0 podman[147801]: 2025-10-08 16:29:44.181948368 +0000 UTC m=+0.037681836 container died 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.192 2 INFO nova.virt.libvirt.driver [-] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Instance destroyed successfully.
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.193 2 DEBUG nova.objects.instance [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lazy-loading 'resources' on Instance uuid 486c5fd5-76da-48a7-9a11-e404ccb4cfba obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e-userdata-shm.mount: Deactivated successfully.
Oct 08 16:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdea16b06e802354792209a7595b1265593301b2c0a34411b2fb4d9554823bcd-merged.mount: Deactivated successfully.
Oct 08 16:29:44 compute-0 podman[147801]: 2025-10-08 16:29:44.225769709 +0000 UTC m=+0.081503137 container cleanup 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:29:44 compute-0 systemd[1]: libpod-conmon-5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e.scope: Deactivated successfully.
Oct 08 16:29:44 compute-0 podman[147805]: 2025-10-08 16:29:44.242513851 +0000 UTC m=+0.089843947 container remove 5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.251 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ca117b73-a9fe-4628-9ebb-30aacf73f85f]: (4, ("Wed Oct  8 04:29:44 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea (5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e)\n5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e\nWed Oct  8 04:29:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea (5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e)\n5bb9e46c4ca0c036551668c0b4e667fab3d75c28f60bd1b31e669870a380825e\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.253 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b97f0c44-2922-44cd-b494-58dbefa74651]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.254 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.254 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8dbb1ce2-632b-4c31-9b57-01aac848d282]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.254 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 kernel: tapeaa04398-50: left promiscuous mode
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.274 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7da98d-6b28-43c9-8711-f281929a7d56]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.304 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4d50467f-8961-46aa-90e2-78369f4de8d9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.305 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[04092494-1fdd-41e1-a70c-e22a70a68b11]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.319 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8307b5-4487-4d7b-bb63-4693f2fd6b93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 221904, 'reachable_time': 19153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 147850, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.320 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:29:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:29:44.321 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[9b98c7e8-8b1a-459b-87a6-8fded94133a3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:29:44 compute-0 systemd[1]: run-netns-ovnmeta\x2deaa04398\x2d576e\x2d4a18\x2da2fe\x2da6a0b2d52eea.mount: Deactivated successfully.
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.699 2 DEBUG nova.virt.libvirt.vif [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:28:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-1479976031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-147',id=16,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:28:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-4i7zhug5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',ima
ge_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:29:30Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=486c5fd5-76da-48a7-9a11-e404ccb4cfba,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4440888-e536-4b06-ba7d-515f11bd5f93", "address": "fa:16:3e:31:3a:8e", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4440888-e5", "ovs_interfaceid": "d4440888-e536-4b06-ba7d-515f11bd5f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.701 2 DEBUG nova.network.os_vif_util [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "d4440888-e536-4b06-ba7d-515f11bd5f93", "address": "fa:16:3e:31:3a:8e", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4440888-e5", "ovs_interfaceid": "d4440888-e536-4b06-ba7d-515f11bd5f93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.702 2 DEBUG nova.network.os_vif_util [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:31:3a:8e,bridge_name='br-int',has_traffic_filtering=True,id=d4440888-e536-4b06-ba7d-515f11bd5f93,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4440888-e5') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.702 2 DEBUG os_vif [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:3a:8e,bridge_name='br-int',has_traffic_filtering=True,id=d4440888-e536-4b06-ba7d-515f11bd5f93,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4440888-e5') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.704 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4440888-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=491a690d-1f7a-4496-92af-831efe43be2b) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.714 2 INFO os_vif [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:31:3a:8e,bridge_name='br-int',has_traffic_filtering=True,id=d4440888-e536-4b06-ba7d-515f11bd5f93,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4440888-e5')
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.715 2 INFO nova.virt.libvirt.driver [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Deleting instance files /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba_del
Oct 08 16:29:44 compute-0 nova_compute[117413]: 2025-10-08 16:29:44.715 2 INFO nova.virt.libvirt.driver [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Deletion of /var/lib/nova/instances/486c5fd5-76da-48a7-9a11-e404ccb4cfba_del complete
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.088 2 DEBUG nova.compute.manager [req-44b694ae-f3f1-4b40-aa51-9b3c2678075d req-b96e75c3-b0b4-4be0-8ef1-d59572383027 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Received event network-vif-unplugged-d4440888-e536-4b06-ba7d-515f11bd5f93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.089 2 DEBUG oslo_concurrency.lockutils [req-44b694ae-f3f1-4b40-aa51-9b3c2678075d req-b96e75c3-b0b4-4be0-8ef1-d59572383027 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.089 2 DEBUG oslo_concurrency.lockutils [req-44b694ae-f3f1-4b40-aa51-9b3c2678075d req-b96e75c3-b0b4-4be0-8ef1-d59572383027 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.090 2 DEBUG oslo_concurrency.lockutils [req-44b694ae-f3f1-4b40-aa51-9b3c2678075d req-b96e75c3-b0b4-4be0-8ef1-d59572383027 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.090 2 DEBUG nova.compute.manager [req-44b694ae-f3f1-4b40-aa51-9b3c2678075d req-b96e75c3-b0b4-4be0-8ef1-d59572383027 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] No waiting events found dispatching network-vif-unplugged-d4440888-e536-4b06-ba7d-515f11bd5f93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.090 2 DEBUG nova.compute.manager [req-44b694ae-f3f1-4b40-aa51-9b3c2678075d req-b96e75c3-b0b4-4be0-8ef1-d59572383027 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Received event network-vif-unplugged-d4440888-e536-4b06-ba7d-515f11bd5f93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.225 2 INFO nova.compute.manager [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.225 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.226 2 DEBUG nova.compute.manager [-] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.226 2 DEBUG nova.network.neutron [-] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:29:45 compute-0 nova_compute[117413]: 2025-10-08 16:29:45.226 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:46 compute-0 nova_compute[117413]: 2025-10-08 16:29:46.038 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:29:46 compute-0 nova_compute[117413]: 2025-10-08 16:29:46.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:46 compute-0 podman[147851]: 2025-10-08 16:29:46.49221042 +0000 UTC m=+0.092197694 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007, container_name=multipathd, io.buildah.version=1.41.4, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.157 2 DEBUG nova.compute.manager [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Received event network-vif-unplugged-d4440888-e536-4b06-ba7d-515f11bd5f93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.158 2 DEBUG oslo_concurrency.lockutils [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.158 2 DEBUG oslo_concurrency.lockutils [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.158 2 DEBUG oslo_concurrency.lockutils [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.159 2 DEBUG nova.compute.manager [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] No waiting events found dispatching network-vif-unplugged-d4440888-e536-4b06-ba7d-515f11bd5f93 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.159 2 DEBUG nova.compute.manager [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Received event network-vif-unplugged-d4440888-e536-4b06-ba7d-515f11bd5f93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.159 2 DEBUG nova.compute.manager [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Received event network-vif-deleted-d4440888-e536-4b06-ba7d-515f11bd5f93 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.159 2 INFO nova.compute.manager [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Neutron deleted interface d4440888-e536-4b06-ba7d-515f11bd5f93; detaching it from the instance and deleting it from the info cache
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.159 2 DEBUG nova.network.neutron [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.207 2 DEBUG nova.network.neutron [-] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.667 2 DEBUG nova.compute.manager [req-15caa7eb-a09e-4f08-b10b-c2a75f44e66d req-fdfe9292-0a43-4451-ab58-2f3685a79dd1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Detach interface failed, port_id=d4440888-e536-4b06-ba7d-515f11bd5f93, reason: Instance 486c5fd5-76da-48a7-9a11-e404ccb4cfba could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:29:47 compute-0 nova_compute[117413]: 2025-10-08 16:29:47.715 2 INFO nova.compute.manager [-] [instance: 486c5fd5-76da-48a7-9a11-e404ccb4cfba] Took 2.49 seconds to deallocate network for instance.
Oct 08 16:29:48 compute-0 nova_compute[117413]: 2025-10-08 16:29:48.240 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:29:48 compute-0 nova_compute[117413]: 2025-10-08 16:29:48.241 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:29:48 compute-0 nova_compute[117413]: 2025-10-08 16:29:48.247 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:48 compute-0 nova_compute[117413]: 2025-10-08 16:29:48.279 2 INFO nova.scheduler.client.report [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Deleted allocations for instance 486c5fd5-76da-48a7-9a11-e404ccb4cfba
Oct 08 16:29:49 compute-0 nova_compute[117413]: 2025-10-08 16:29:49.309 2 DEBUG oslo_concurrency.lockutils [None req-b201a49b-2b01-454e-b2d3-782d2a58cf67 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "486c5fd5-76da-48a7-9a11-e404ccb4cfba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.937s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:29:49 compute-0 nova_compute[117413]: 2025-10-08 16:29:49.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:51 compute-0 nova_compute[117413]: 2025-10-08 16:29:51.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:51 compute-0 podman[147870]: 2025-10-08 16:29:51.487214925 +0000 UTC m=+0.085043789 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Oct 08 16:29:54 compute-0 nova_compute[117413]: 2025-10-08 16:29:54.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:56 compute-0 nova_compute[117413]: 2025-10-08 16:29:56.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:57 compute-0 podman[147891]: 2025-10-08 16:29:57.495076811 +0000 UTC m=+0.101362529 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 08 16:29:59 compute-0 nova_compute[117413]: 2025-10-08 16:29:59.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:29:59 compute-0 podman[127881]: time="2025-10-08T16:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:29:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:29:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3023 "" "Go-http-client/1.1"
Oct 08 16:30:00 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 08 16:30:00 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 08 16:30:00 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 08 16:30:00 compute-0 podman[147911]: 2025-10-08 16:30:00.482214716 +0000 UTC m=+0.071684374 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:30:01 compute-0 nova_compute[117413]: 2025-10-08 16:30:01.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: ERROR   16:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: ERROR   16:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: ERROR   16:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: ERROR   16:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: ERROR   16:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:30:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:30:04 compute-0 nova_compute[117413]: 2025-10-08 16:30:04.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:04 compute-0 nova_compute[117413]: 2025-10-08 16:30:04.864 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:06 compute-0 nova_compute[117413]: 2025-10-08 16:30:06.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:06 compute-0 podman[147932]: 2025-10-08 16:30:06.439675881 +0000 UTC m=+0.050949547 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:30:06 compute-0 podman[147933]: 2025-10-08 16:30:06.493203072 +0000 UTC m=+0.101246665 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.364 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.364 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.364 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.896 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.897 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.897 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:07 compute-0 nova_compute[117413]: 2025-10-08 16:30:07.897 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:30:08 compute-0 nova_compute[117413]: 2025-10-08 16:30:08.069 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:30:08 compute-0 nova_compute[117413]: 2025-10-08 16:30:08.070 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:08 compute-0 nova_compute[117413]: 2025-10-08 16:30:08.101 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:08 compute-0 nova_compute[117413]: 2025-10-08 16:30:08.103 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6139MB free_disk=73.25067901611328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:30:08 compute-0 nova_compute[117413]: 2025-10-08 16:30:08.103 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:08 compute-0 nova_compute[117413]: 2025-10-08 16:30:08.104 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.179 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.180 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:30:08 up 38 min,  0 user,  load average: 0.26, 0.25, 0.26\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.215 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.263 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.264 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.279 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.298 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.318 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:09 compute-0 nova_compute[117413]: 2025-10-08 16:30:09.826 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:30:10 compute-0 nova_compute[117413]: 2025-10-08 16:30:10.334 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:30:10 compute-0 nova_compute[117413]: 2025-10-08 16:30:10.334 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.230s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:11 compute-0 nova_compute[117413]: 2025-10-08 16:30:11.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:11 compute-0 nova_compute[117413]: 2025-10-08 16:30:11.332 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:11 compute-0 nova_compute[117413]: 2025-10-08 16:30:11.332 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:11 compute-0 nova_compute[117413]: 2025-10-08 16:30:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:14 compute-0 nova_compute[117413]: 2025-10-08 16:30:14.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:16 compute-0 nova_compute[117413]: 2025-10-08 16:30:16.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:17 compute-0 nova_compute[117413]: 2025-10-08 16:30:17.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:30:17 compute-0 podman[147985]: 2025-10-08 16:30:17.451714745 +0000 UTC m=+0.057813155 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:30:19 compute-0 nova_compute[117413]: 2025-10-08 16:30:19.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:21 compute-0 nova_compute[117413]: 2025-10-08 16:30:21.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:22 compute-0 podman[148005]: 2025-10-08 16:30:22.498131278 +0000 UTC m=+0.089725723 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Oct 08 16:30:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:23.946 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:30:23 compute-0 nova_compute[117413]: 2025-10-08 16:30:23.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:23 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:23.947 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:30:24 compute-0 nova_compute[117413]: 2025-10-08 16:30:24.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:26 compute-0 nova_compute[117413]: 2025-10-08 16:30:26.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:27 compute-0 unix_chkpwd[148030]: password check failed for user (root)
Oct 08 16:30:27 compute-0 sshd-session[148028]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 08 16:30:28 compute-0 podman[148031]: 2025-10-08 16:30:28.465463188 +0000 UTC m=+0.066728321 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:30:29 compute-0 sshd-session[148028]: Failed password for root from 193.46.255.244 port 11374 ssh2
Oct 08 16:30:29 compute-0 nova_compute[117413]: 2025-10-08 16:30:29.537 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:29 compute-0 nova_compute[117413]: 2025-10-08 16:30:29.537 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:29 compute-0 unix_chkpwd[148051]: password check failed for user (root)
Oct 08 16:30:29 compute-0 nova_compute[117413]: 2025-10-08 16:30:29.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:29 compute-0 podman[127881]: time="2025-10-08T16:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:30:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:30:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3024 "" "Go-http-client/1.1"
Oct 08 16:30:30 compute-0 nova_compute[117413]: 2025-10-08 16:30:30.043 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:30:30 compute-0 nova_compute[117413]: 2025-10-08 16:30:30.588 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:30 compute-0 nova_compute[117413]: 2025-10-08 16:30:30.589 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:30 compute-0 nova_compute[117413]: 2025-10-08 16:30:30.598 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:30:30 compute-0 nova_compute[117413]: 2025-10-08 16:30:30.599 2 INFO nova.compute.claims [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:30:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:30.949 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:31 compute-0 nova_compute[117413]: 2025-10-08 16:30:31.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:31 compute-0 openstack_network_exporter[130039]: ERROR   16:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:30:31 compute-0 openstack_network_exporter[130039]: ERROR   16:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:30:31 compute-0 openstack_network_exporter[130039]: ERROR   16:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:30:31 compute-0 openstack_network_exporter[130039]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:30:31 compute-0 openstack_network_exporter[130039]: ERROR   16:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:30:31 compute-0 podman[148052]: 2025-10-08 16:30:31.456822794 +0000 UTC m=+0.066705390 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:30:31 compute-0 nova_compute[117413]: 2025-10-08 16:30:31.646 2 DEBUG nova.compute.provider_tree [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:30:32 compute-0 sshd-session[148028]: Failed password for root from 193.46.255.244 port 11374 ssh2
Oct 08 16:30:32 compute-0 nova_compute[117413]: 2025-10-08 16:30:32.153 2 DEBUG nova.scheduler.client.report [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:30:32 compute-0 nova_compute[117413]: 2025-10-08 16:30:32.663 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.074s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:32 compute-0 nova_compute[117413]: 2025-10-08 16:30:32.664 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:30:33 compute-0 nova_compute[117413]: 2025-10-08 16:30:33.176 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:30:33 compute-0 nova_compute[117413]: 2025-10-08 16:30:33.177 2 DEBUG nova.network.neutron [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:30:33 compute-0 nova_compute[117413]: 2025-10-08 16:30:33.177 2 WARNING neutronclient.v2_0.client [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:30:33 compute-0 nova_compute[117413]: 2025-10-08 16:30:33.178 2 WARNING neutronclient.v2_0.client [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:30:33 compute-0 unix_chkpwd[148072]: password check failed for user (root)
Oct 08 16:30:33 compute-0 nova_compute[117413]: 2025-10-08 16:30:33.688 2 INFO nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:30:33 compute-0 nova_compute[117413]: 2025-10-08 16:30:33.707 2 DEBUG nova.network.neutron [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Successfully created port: 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.195 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.285 2 DEBUG nova.network.neutron [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Successfully updated port: 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.343 2 DEBUG nova.compute.manager [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-changed-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.343 2 DEBUG nova.compute.manager [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Refreshing instance network info cache due to event network-changed-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.343 2 DEBUG oslo_concurrency.lockutils [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-0d360634-7168-4ee5-af98-052f1a3003a3" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.344 2 DEBUG oslo_concurrency.lockutils [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-0d360634-7168-4ee5-af98-052f1a3003a3" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.344 2 DEBUG nova.network.neutron [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Refreshing network info cache for port 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.792 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "refresh_cache-0d360634-7168-4ee5-af98-052f1a3003a3" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.850 2 WARNING neutronclient.v2_0.client [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:30:34 compute-0 nova_compute[117413]: 2025-10-08 16:30:34.911 2 DEBUG nova.network.neutron [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.213 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.215 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.215 2 INFO nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Creating image(s)
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.216 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "/var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.216 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "/var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.216 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "/var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.217 2 DEBUG oslo_utils.imageutils.format_inspector [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.220 2 DEBUG oslo_utils.imageutils.format_inspector [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.222 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.279 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.280 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.280 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.281 2 DEBUG oslo_utils.imageutils.format_inspector [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.284 2 DEBUG oslo_utils.imageutils.format_inspector [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.284 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.336 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.337 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.372 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.373 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.092s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.373 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.430 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.431 2 DEBUG nova.virt.disk.api [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Checking if we can resize image /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.432 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.524 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.525 2 DEBUG nova.virt.disk.api [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Cannot resize image /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.525 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.525 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Ensure instance console log exists: /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.526 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.526 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.526 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:35 compute-0 sshd-session[148028]: Failed password for root from 193.46.255.244 port 11374 ssh2
Oct 08 16:30:35 compute-0 nova_compute[117413]: 2025-10-08 16:30:35.988 2 DEBUG nova.network.neutron [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:30:36 compute-0 nova_compute[117413]: 2025-10-08 16:30:36.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:36 compute-0 nova_compute[117413]: 2025-10-08 16:30:36.495 2 DEBUG oslo_concurrency.lockutils [req-fe6e454e-9ced-4d7f-8ee5-a7bcd516def4 req-07da1fe0-1de8-4855-ab52-73d2987ad43c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-0d360634-7168-4ee5-af98-052f1a3003a3" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:30:36 compute-0 nova_compute[117413]: 2025-10-08 16:30:36.496 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquired lock "refresh_cache-0d360634-7168-4ee5-af98-052f1a3003a3" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:30:36 compute-0 nova_compute[117413]: 2025-10-08 16:30:36.496 2 DEBUG nova.network.neutron [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:30:37 compute-0 nova_compute[117413]: 2025-10-08 16:30:37.074 2 DEBUG nova.network.neutron [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:30:37 compute-0 sshd-session[148028]: Received disconnect from 193.46.255.244 port 11374:11:  [preauth]
Oct 08 16:30:37 compute-0 sshd-session[148028]: Disconnected from authenticating user root 193.46.255.244 port 11374 [preauth]
Oct 08 16:30:37 compute-0 sshd-session[148028]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 08 16:30:37 compute-0 podman[148088]: 2025-10-08 16:30:37.470782596 +0000 UTC m=+0.069019447 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:30:37 compute-0 podman[148089]: 2025-10-08 16:30:37.48930722 +0000 UTC m=+0.095174481 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:30:38 compute-0 nova_compute[117413]: 2025-10-08 16:30:38.145 2 WARNING neutronclient.v2_0.client [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:30:38 compute-0 unix_chkpwd[148140]: password check failed for user (root)
Oct 08 16:30:38 compute-0 sshd-session[148138]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 08 16:30:38 compute-0 nova_compute[117413]: 2025-10-08 16:30:38.559 2 DEBUG nova.network.neutron [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Updating instance_info_cache with network_info: [{"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.067 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Releasing lock "refresh_cache-0d360634-7168-4ee5-af98-052f1a3003a3" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.067 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Instance network_info: |[{"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.070 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Start _get_guest_xml network_info=[{"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.073 2 WARNING nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.074 2 DEBUG nova.virt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-902349060', uuid='0d360634-7168-4ee5-af98-052f1a3003a3'), owner=OwnerMeta(userid='a35a495eee564e31a6dce3a5c601665c', username='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin', projectid='621f620ded214ac792354cb32ce3de49', projectname='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759941039.0747907) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.078 2 DEBUG nova.virt.libvirt.host [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.078 2 DEBUG nova.virt.libvirt.host [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.081 2 DEBUG nova.virt.libvirt.host [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.082 2 DEBUG nova.virt.libvirt.host [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.082 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.082 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.083 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.083 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.083 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.083 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.083 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.084 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.084 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.084 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.084 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.084 2 DEBUG nova.virt.hardware [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.087 2 DEBUG nova.virt.libvirt.vif [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:30:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-902349060',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-902',id=19,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-w05ohmnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_use
r_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:30:34Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=0d360634-7168-4ee5-af98-052f1a3003a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.088 2 DEBUG nova.network.os_vif_util [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.088 2 DEBUG nova.network.os_vif_util [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.089 2 DEBUG nova.objects.instance [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d360634-7168-4ee5-af98-052f1a3003a3 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.620 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <uuid>0d360634-7168-4ee5-af98-052f1a3003a3</uuid>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <name>instance-00000013</name>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteNodeResourceConsolidationStrategy-server-902349060</nova:name>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:30:39</nova:creationTime>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:30:39 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:30:39 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:user uuid="a35a495eee564e31a6dce3a5c601665c">tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin</nova:user>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:project uuid="621f620ded214ac792354cb32ce3de49">tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320</nova:project>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         <nova:port uuid="2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f">
Oct 08 16:30:39 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <system>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <entry name="serial">0d360634-7168-4ee5-af98-052f1a3003a3</entry>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <entry name="uuid">0d360634-7168-4ee5-af98-052f1a3003a3</entry>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </system>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <os>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </os>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <features>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </features>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.config"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:14:26:e7"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <target dev="tap2d5e5ce6-d5"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/console.log" append="off"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <video>
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </video>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:30:39 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:30:39 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:30:39 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:30:39 compute-0 nova_compute[117413]: </domain>
Oct 08 16:30:39 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.621 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Preparing to wait for external event network-vif-plugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.621 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.621 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.622 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.622 2 DEBUG nova.virt.libvirt.vif [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:30:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-902349060',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-902',id=19,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-w05ohmnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:30:34Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=0d360634-7168-4ee5-af98-052f1a3003a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.622 2 DEBUG nova.network.os_vif_util [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.623 2 DEBUG nova.network.os_vif_util [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.623 2 DEBUG os_vif [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.624 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '3557313e-3028-5be2-9622-ac5baada6750', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d5e5ce6-d5, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap2d5e5ce6-d5, col_values=(('qos', UUID('6cee7474-5a33-4e74-818c-535b9bea510e')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap2d5e5ce6-d5, col_values=(('external_ids', {'iface-id': '2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:26:e7', 'vm-uuid': '0d360634-7168-4ee5-af98-052f1a3003a3'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 NetworkManager[1034]: <info>  [1759941039.6348] manager: (tap2d5e5ce6-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:39 compute-0 nova_compute[117413]: 2025-10-08 16:30:39.642 2 INFO os_vif [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5')
Oct 08 16:30:39 compute-0 sshd-session[148138]: Failed password for root from 193.46.255.244 port 43700 ssh2
Oct 08 16:30:40 compute-0 unix_chkpwd[148143]: password check failed for user (root)
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.178 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.179 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.179 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] No VIF found with MAC fa:16:3e:14:26:e7, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.179 2 INFO nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Using config drive
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.693 2 WARNING neutronclient.v2_0.client [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.869 2 INFO nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Creating config drive at /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.config
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.874 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpjvdgx4wu execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:30:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:41.913 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:41.914 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:41.914 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:41 compute-0 nova_compute[117413]: 2025-10-08 16:30:41.998 2 DEBUG oslo_concurrency.processutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpjvdgx4wu" returned: 0 in 0.123s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:30:42 compute-0 kernel: tap2d5e5ce6-d5: entered promiscuous mode
Oct 08 16:30:42 compute-0 NetworkManager[1034]: <info>  [1759941042.0831] manager: (tap2d5e5ce6-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct 08 16:30:42 compute-0 ovn_controller[19768]: 2025-10-08T16:30:42Z|00152|binding|INFO|Claiming lport 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f for this chassis.
Oct 08 16:30:42 compute-0 ovn_controller[19768]: 2025-10-08T16:30:42Z|00153|binding|INFO|2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f: Claiming fa:16:3e:14:26:e7 10.100.0.11
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.093 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:26:e7 10.100.0.11'], port_security=['fa:16:3e:14:26:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0d360634-7168-4ee5-af98-052f1a3003a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.094 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea bound to our chassis
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.096 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:30:42 compute-0 ovn_controller[19768]: 2025-10-08T16:30:42Z|00154|binding|INFO|Setting lport 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f ovn-installed in OVS
Oct 08 16:30:42 compute-0 ovn_controller[19768]: 2025-10-08T16:30:42Z|00155|binding|INFO|Setting lport 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f up in Southbound
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.110 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d8bff73c-3251-4b2c-a709-6ad2f75e14ff]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.111 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeaa04398-51 in ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.112 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeaa04398-50 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.112 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c696116e-ba9b-46a0-ba06-edb7f3207a17]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.113 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe83098-57b4-4f69-bb3a-9bc65dfee8ab]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 systemd-machined[77548]: New machine qemu-14-instance-00000013.
Oct 08 16:30:42 compute-0 systemd-udevd[148166]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.125 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[ae86a4f5-27a5-403a-8e0d-04788911744e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.131 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[29b6ad58-9901-4e25-8d0f-88f3eb18637a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 NetworkManager[1034]: <info>  [1759941042.1406] device (tap2d5e5ce6-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:30:42 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-00000013.
Oct 08 16:30:42 compute-0 NetworkManager[1034]: <info>  [1759941042.1417] device (tap2d5e5ce6-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.172 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[4186b1db-47b9-497f-a5f8-e19e739acf88]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.176 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b49d4b-587c-42bb-8da0-58cee0516f3a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 NetworkManager[1034]: <info>  [1759941042.1783] manager: (tapeaa04398-50): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Oct 08 16:30:42 compute-0 systemd-udevd[148170]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.215 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9d8254-f4fa-4a73-b3db-0e2335e77fc3]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.218 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[fe53fa0c-442b-4b8b-b08f-5aa9a67dd979]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.230 2 DEBUG nova.compute.manager [req-13757b71-6963-4c3d-a409-ce30cf71e676 req-a8ddfe5d-eea3-43be-8139-d29df7f29474 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-plugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.231 2 DEBUG oslo_concurrency.lockutils [req-13757b71-6963-4c3d-a409-ce30cf71e676 req-a8ddfe5d-eea3-43be-8139-d29df7f29474 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.231 2 DEBUG oslo_concurrency.lockutils [req-13757b71-6963-4c3d-a409-ce30cf71e676 req-a8ddfe5d-eea3-43be-8139-d29df7f29474 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.231 2 DEBUG oslo_concurrency.lockutils [req-13757b71-6963-4c3d-a409-ce30cf71e676 req-a8ddfe5d-eea3-43be-8139-d29df7f29474 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.231 2 DEBUG nova.compute.manager [req-13757b71-6963-4c3d-a409-ce30cf71e676 req-a8ddfe5d-eea3-43be-8139-d29df7f29474 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Processing event network-vif-plugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:30:42 compute-0 NetworkManager[1034]: <info>  [1759941042.2454] device (tapeaa04398-50): carrier: link connected
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.254 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[13051128-b5ae-4485-a484-27621f5e60d4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.274 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[da144434-3551-4c91-8194-88a0ccd69ce1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 233093, 'reachable_time': 16299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148197, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.295 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0e3dad-9947-4b01-bb0b-bb195d47cd91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecb:e1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 233093, 'tstamp': 233093}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148198, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.317 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c946f2-a30a-4944-898e-68cc855188c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 233093, 'reachable_time': 16299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 148199, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.357 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2416cacc-4b44-4970-8c90-20ae694c3855]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.424 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a52992e3-7ec9-4f22-a926-7c33949072a1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.426 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.426 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.426 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa04398-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 kernel: tapeaa04398-50: entered promiscuous mode
Oct 08 16:30:42 compute-0 NetworkManager[1034]: <info>  [1759941042.4303] manager: (tapeaa04398-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.433 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeaa04398-50, col_values=(('external_ids', {'iface-id': '4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 ovn_controller[19768]: 2025-10-08T16:30:42Z|00156|binding|INFO|Releasing lport 4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c from this chassis (sb_readonly=0)
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.438 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4702feb1-8ec0-44d3-bc4a-0110d73dc544]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.440 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.440 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.440 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for eaa04398-576e-4a18-a2fe-a6a0b2d52eea disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.440 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.441 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1154031f-47c9-4b04-a145-890bbb75c143]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.441 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.442 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ed24b7-3fad-4553-8ff4-149d3f111489]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.443 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:30:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:30:42.443 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'env', 'PROCESS_TAG=haproxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:42 compute-0 sshd-session[148138]: Failed password for root from 193.46.255.244 port 43700 ssh2
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.833 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.837 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.842 2 INFO nova.virt.libvirt.driver [-] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Instance spawned successfully.
Oct 08 16:30:42 compute-0 nova_compute[117413]: 2025-10-08 16:30:42.843 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:30:42 compute-0 podman[148237]: 2025-10-08 16:30:42.84583955 +0000 UTC m=+0.061715617 container create 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:30:42 compute-0 systemd[1]: Started libpod-conmon-503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811.scope.
Oct 08 16:30:42 compute-0 podman[148237]: 2025-10-08 16:30:42.811991706 +0000 UTC m=+0.027867793 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:30:42 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:30:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6c74842704733934ec4d2a0902546b5a05b11e1ec472978b4d0c62dfa954be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:30:42 compute-0 podman[148237]: 2025-10-08 16:30:42.963448875 +0000 UTC m=+0.179324982 container init 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:30:42 compute-0 podman[148237]: 2025-10-08 16:30:42.973285018 +0000 UTC m=+0.189161115 container start 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:30:43 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [NOTICE]   (148256) : New worker (148258) forked
Oct 08 16:30:43 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [NOTICE]   (148256) : Loading success.
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.362 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.363 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.364 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.366 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.367 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.368 2 DEBUG nova.virt.libvirt.driver [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.881 2 INFO nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Took 8.67 seconds to spawn the instance on the hypervisor.
Oct 08 16:30:43 compute-0 nova_compute[117413]: 2025-10-08 16:30:43.882 2 DEBUG nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:30:44 compute-0 unix_chkpwd[148267]: password check failed for user (root)
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.283 2 DEBUG nova.compute.manager [req-6dd073a3-7e1a-4390-8529-149a4c0f8f9c req-bbf1f999-c835-4029-8ced-b5fd7e0e6d24 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-plugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.285 2 DEBUG oslo_concurrency.lockutils [req-6dd073a3-7e1a-4390-8529-149a4c0f8f9c req-bbf1f999-c835-4029-8ced-b5fd7e0e6d24 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.285 2 DEBUG oslo_concurrency.lockutils [req-6dd073a3-7e1a-4390-8529-149a4c0f8f9c req-bbf1f999-c835-4029-8ced-b5fd7e0e6d24 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.286 2 DEBUG oslo_concurrency.lockutils [req-6dd073a3-7e1a-4390-8529-149a4c0f8f9c req-bbf1f999-c835-4029-8ced-b5fd7e0e6d24 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.286 2 DEBUG nova.compute.manager [req-6dd073a3-7e1a-4390-8529-149a4c0f8f9c req-bbf1f999-c835-4029-8ced-b5fd7e0e6d24 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] No waiting events found dispatching network-vif-plugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.287 2 WARNING nova.compute.manager [req-6dd073a3-7e1a-4390-8529-149a4c0f8f9c req-bbf1f999-c835-4029-8ced-b5fd7e0e6d24 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received unexpected event network-vif-plugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f for instance with vm_state active and task_state None.
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.419 2 INFO nova.compute.manager [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Took 13.87 seconds to build instance.
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:44 compute-0 nova_compute[117413]: 2025-10-08 16:30:44.927 2 DEBUG oslo_concurrency.lockutils [None req-acb0f37c-b30a-41d9-a0e4-dab5c49c06e8 a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.389s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:30:46 compute-0 sshd-session[148138]: Failed password for root from 193.46.255.244 port 43700 ssh2
Oct 08 16:30:46 compute-0 nova_compute[117413]: 2025-10-08 16:30:46.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:47 compute-0 sshd-session[148138]: Received disconnect from 193.46.255.244 port 43700:11:  [preauth]
Oct 08 16:30:47 compute-0 sshd-session[148138]: Disconnected from authenticating user root 193.46.255.244 port 43700 [preauth]
Oct 08 16:30:47 compute-0 sshd-session[148138]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 08 16:30:48 compute-0 podman[148270]: 2025-10-08 16:30:48.493261421 +0000 UTC m=+0.090566337 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:30:48 compute-0 unix_chkpwd[148292]: password check failed for user (root)
Oct 08 16:30:48 compute-0 sshd-session[148268]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 08 16:30:49 compute-0 nova_compute[117413]: 2025-10-08 16:30:49.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:51 compute-0 sshd-session[148268]: Failed password for root from 193.46.255.244 port 20798 ssh2
Oct 08 16:30:51 compute-0 nova_compute[117413]: 2025-10-08 16:30:51.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:52 compute-0 unix_chkpwd[148293]: password check failed for user (root)
Oct 08 16:30:53 compute-0 podman[148294]: 2025-10-08 16:30:53.506842877 +0000 UTC m=+0.100060683 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct 08 16:30:54 compute-0 nova_compute[117413]: 2025-10-08 16:30:54.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:55 compute-0 sshd-session[148268]: Failed password for root from 193.46.255.244 port 20798 ssh2
Oct 08 16:30:55 compute-0 ovn_controller[19768]: 2025-10-08T16:30:55Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:26:e7 10.100.0.11
Oct 08 16:30:55 compute-0 ovn_controller[19768]: 2025-10-08T16:30:55Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:26:e7 10.100.0.11
Oct 08 16:30:56 compute-0 nova_compute[117413]: 2025-10-08 16:30:56.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:56 compute-0 unix_chkpwd[148327]: password check failed for user (root)
Oct 08 16:30:58 compute-0 nova_compute[117413]: 2025-10-08 16:30:58.191 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Creating tmpfile /var/lib/nova/instances/tmpvodbbal2 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:30:58 compute-0 nova_compute[117413]: 2025-10-08 16:30:58.192 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:30:58 compute-0 nova_compute[117413]: 2025-10-08 16:30:58.261 2 DEBUG nova.compute.manager [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvodbbal2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:30:58 compute-0 sshd-session[148268]: Failed password for root from 193.46.255.244 port 20798 ssh2
Oct 08 16:30:59 compute-0 podman[148329]: 2025-10-08 16:30:59.491011606 +0000 UTC m=+0.080508411 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:30:59 compute-0 nova_compute[117413]: 2025-10-08 16:30:59.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:30:59 compute-0 podman[127881]: time="2025-10-08T16:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:30:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:30:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3484 "" "Go-http-client/1.1"
Oct 08 16:31:00 compute-0 nova_compute[117413]: 2025-10-08 16:31:00.297 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:00 compute-0 sshd-session[148268]: Received disconnect from 193.46.255.244 port 20798:11:  [preauth]
Oct 08 16:31:00 compute-0 sshd-session[148268]: Disconnected from authenticating user root 193.46.255.244 port 20798 [preauth]
Oct 08 16:31:00 compute-0 sshd-session[148268]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.244  user=root
Oct 08 16:31:01 compute-0 nova_compute[117413]: 2025-10-08 16:31:01.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: ERROR   16:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: ERROR   16:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: ERROR   16:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: ERROR   16:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:31:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:31:02 compute-0 podman[148351]: 2025-10-08 16:31:02.47402642 +0000 UTC m=+0.065916853 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:31:04 compute-0 nova_compute[117413]: 2025-10-08 16:31:04.118 2 DEBUG nova.compute.manager [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvodbbal2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a35118d1-edb1-4920-8978-2712410057a1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:31:04 compute-0 nova_compute[117413]: 2025-10-08 16:31:04.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:04 compute-0 nova_compute[117413]: 2025-10-08 16:31:04.866 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:05 compute-0 nova_compute[117413]: 2025-10-08 16:31:05.139 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-a35118d1-edb1-4920-8978-2712410057a1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:31:05 compute-0 nova_compute[117413]: 2025-10-08 16:31:05.140 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-a35118d1-edb1-4920-8978-2712410057a1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:31:05 compute-0 nova_compute[117413]: 2025-10-08 16:31:05.140 2 DEBUG nova.network.neutron [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:31:05 compute-0 nova_compute[117413]: 2025-10-08 16:31:05.646 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:06 compute-0 nova_compute[117413]: 2025-10-08 16:31:06.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:06 compute-0 nova_compute[117413]: 2025-10-08 16:31:06.375 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:06 compute-0 nova_compute[117413]: 2025-10-08 16:31:06.541 2 DEBUG nova.network.neutron [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Updating instance_info_cache with network_info: [{"id": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "address": "fa:16:3e:c5:bb:fd", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49245b4-ae", "ovs_interfaceid": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.048 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-a35118d1-edb1-4920-8978-2712410057a1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.062 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvodbbal2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a35118d1-edb1-4920-8978-2712410057a1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.063 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Creating instance directory: /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.063 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Creating disk.info with the contents: {'/var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk': 'qcow2', '/var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.064 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.065 2 DEBUG nova.objects.instance [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a35118d1-edb1-4920-8978-2712410057a1 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.572 2 DEBUG oslo_utils.imageutils.format_inspector [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.576 2 DEBUG oslo_utils.imageutils.format_inspector [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.578 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.671 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.673 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.673 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.674 2 DEBUG oslo_utils.imageutils.format_inspector [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.678 2 DEBUG oslo_utils.imageutils.format_inspector [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.678 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.748 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.749 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.785 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.786 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.113s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.787 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.856 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.857 2 DEBUG nova.virt.disk.api [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.857 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.924 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.925 2 DEBUG nova.virt.disk.api [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:31:07 compute-0 nova_compute[117413]: 2025-10-08 16:31:07.926 2 DEBUG nova.objects.instance [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid a35118d1-edb1-4920-8978-2712410057a1 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.432 2 DEBUG nova.objects.base [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<a35118d1-edb1-4920-8978-2712410057a1> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.433 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:08 compute-0 podman[148386]: 2025-10-08 16:31:08.460674052 +0000 UTC m=+0.063181814 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.464 2 DEBUG oslo_concurrency.processutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1/disk.config 497664" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.466 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.467 2 DEBUG nova.virt.libvirt.vif [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:30:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-713322440',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-713',id=18,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:30:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-5t77m6gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:30:25Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=a35118d1-edb1-4920-8978-2712410057a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "address": "fa:16:3e:c5:bb:fd", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa49245b4-ae", "ovs_interfaceid": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.467 2 DEBUG nova.network.os_vif_util [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "address": "fa:16:3e:c5:bb:fd", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa49245b4-ae", "ovs_interfaceid": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.468 2 DEBUG nova.network.os_vif_util [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:bb:fd,bridge_name='br-int',has_traffic_filtering=True,id=a49245b4-aeef-48fb-9de7-cb4496fa9385,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49245b4-ae') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.469 2 DEBUG os_vif [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:bb:fd,bridge_name='br-int',has_traffic_filtering=True,id=a49245b4-aeef-48fb-9de7-cb4496fa9385,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49245b4-ae') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.471 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'c89a23ca-c942-5b1a-9d2d-f15ee62f8aab', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa49245b4-ae, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapa49245b4-ae, col_values=(('qos', UUID('ee814532-e9a2-4448-a6dd-a0c8347be1f3')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapa49245b4-ae, col_values=(('external_ids', {'iface-id': 'a49245b4-aeef-48fb-9de7-cb4496fa9385', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:bb:fd', 'vm-uuid': 'a35118d1-edb1-4920-8978-2712410057a1'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 NetworkManager[1034]: <info>  [1759941068.4786] manager: (tapa49245b4-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.485 2 INFO os_vif [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:bb:fd,bridge_name='br-int',has_traffic_filtering=True,id=a49245b4-aeef-48fb-9de7-cb4496fa9385,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49245b4-ae')
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.486 2 DEBUG nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.486 2 DEBUG nova.compute.manager [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvodbbal2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a35118d1-edb1-4920-8978-2712410057a1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.487 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:08 compute-0 podman[148387]: 2025-10-08 16:31:08.48776178 +0000 UTC m=+0.095998566 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 08 16:31:08 compute-0 nova_compute[117413]: 2025-10-08 16:31:08.833 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.404 2 DEBUG nova.network.neutron [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Port a49245b4-aeef-48fb-9de7-cb4496fa9385 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.415 2 DEBUG nova.compute.manager [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvodbbal2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a35118d1-edb1-4920-8978-2712410057a1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:09 compute-0 nova_compute[117413]: 2025-10-08 16:31:09.878 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:31:10 compute-0 nova_compute[117413]: 2025-10-08 16:31:10.912 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:10 compute-0 nova_compute[117413]: 2025-10-08 16:31:10.972 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:10 compute-0 nova_compute[117413]: 2025-10-08 16:31:10.973 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.039 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.180 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.181 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.206 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.206 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5963MB free_disk=73.22064590454102GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.207 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.207 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:11 compute-0 nova_compute[117413]: 2025-10-08 16:31:11.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:12 compute-0 ovn_controller[19768]: 2025-10-08T16:31:12Z|00157|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.230 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Migration for instance a35118d1-edb1-4920-8978-2712410057a1 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.737 2 INFO nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: a35118d1-edb1-4920-8978-2712410057a1] Updating resource usage from migration bae61738-d0b5-4aef-9809-2f296302cd39
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.738 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: a35118d1-edb1-4920-8978-2712410057a1] Starting to track incoming migration bae61738-d0b5-4aef-9809-2f296302cd39 with flavor 43cd5d45-bd07-4889-a671-dd23291090c1 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 08 16:31:12 compute-0 kernel: tapa49245b4-ae: entered promiscuous mode
Oct 08 16:31:12 compute-0 NetworkManager[1034]: <info>  [1759941072.8344] manager: (tapa49245b4-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Oct 08 16:31:12 compute-0 ovn_controller[19768]: 2025-10-08T16:31:12Z|00158|binding|INFO|Claiming lport a49245b4-aeef-48fb-9de7-cb4496fa9385 for this additional chassis.
Oct 08 16:31:12 compute-0 ovn_controller[19768]: 2025-10-08T16:31:12Z|00159|binding|INFO|a49245b4-aeef-48fb-9de7-cb4496fa9385: Claiming fa:16:3e:c5:bb:fd 10.100.0.9
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.847 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:bb:fd 10.100.0.9'], port_security=['fa:16:3e:c5:bb:fd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a35118d1-edb1-4920-8978-2712410057a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '10', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=a49245b4-aeef-48fb-9de7-cb4496fa9385) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.847 28633 INFO neutron.agent.ovn.metadata.agent [-] Port a49245b4-aeef-48fb-9de7-cb4496fa9385 in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea unbound from our chassis
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.848 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:12 compute-0 ovn_controller[19768]: 2025-10-08T16:31:12Z|00160|binding|INFO|Setting lport a49245b4-aeef-48fb-9de7-cb4496fa9385 ovn-installed in OVS
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:12 compute-0 systemd-udevd[148463]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.867 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ca801c25-e76d-45ec-bc64-cf69292fd04d]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:12 compute-0 NetworkManager[1034]: <info>  [1759941072.8842] device (tapa49245b4-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:31:12 compute-0 NetworkManager[1034]: <info>  [1759941072.8853] device (tapa49245b4-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:31:12 compute-0 systemd-machined[77548]: New machine qemu-15-instance-00000012.
Oct 08 16:31:12 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-00000012.
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.907 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a4fc1a-cc3e-4524-90cb-568a9617e6ef]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.912 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1d3f3b-c5e3-4a24-ab5b-4dde7dc46fa6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.948 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[445e2b21-c606-4bd3-a4e9-d12f0d70c52e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.964 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[73b23b5e-9eef-4c1b-ba36-ce68896a19da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 233093, 'reachable_time': 16299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148475, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.986 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5e6f39-e074-4a13-ae28-aad86fbc8653]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 233107, 'tstamp': 233107}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148479, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 233110, 'tstamp': 233110}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148479, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.987 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:12 compute-0 nova_compute[117413]: 2025-10-08 16:31:12.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.990 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa04398-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.990 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.990 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeaa04398-50, col_values=(('external_ids', {'iface-id': '4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.990 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:31:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:12.992 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a8050b-ed1e-4daf-8746-2b48a637ed81]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:13 compute-0 nova_compute[117413]: 2025-10-08 16:31:13.274 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0d360634-7168-4ee5-af98-052f1a3003a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:31:13 compute-0 nova_compute[117413]: 2025-10-08 16:31:13.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:13 compute-0 nova_compute[117413]: 2025-10-08 16:31:13.783 2 WARNING nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance a35118d1-edb1-4920-8978-2712410057a1 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 08 16:31:13 compute-0 nova_compute[117413]: 2025-10-08 16:31:13.783 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:31:13 compute-0 nova_compute[117413]: 2025-10-08 16:31:13.784 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:31:11 up 39 min,  0 user,  load average: 0.21, 0.24, 0.25\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_621f620ded214ac792354cb32ce3de49': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:31:13 compute-0 nova_compute[117413]: 2025-10-08 16:31:13.841 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:31:14 compute-0 nova_compute[117413]: 2025-10-08 16:31:14.351 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:31:14 compute-0 nova_compute[117413]: 2025-10-08 16:31:14.862 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:31:14 compute-0 nova_compute[117413]: 2025-10-08 16:31:14.863 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.656s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:15 compute-0 ovn_controller[19768]: 2025-10-08T16:31:15Z|00161|binding|INFO|Claiming lport a49245b4-aeef-48fb-9de7-cb4496fa9385 for this chassis.
Oct 08 16:31:15 compute-0 ovn_controller[19768]: 2025-10-08T16:31:15Z|00162|binding|INFO|a49245b4-aeef-48fb-9de7-cb4496fa9385: Claiming fa:16:3e:c5:bb:fd 10.100.0.9
Oct 08 16:31:15 compute-0 ovn_controller[19768]: 2025-10-08T16:31:15Z|00163|binding|INFO|Setting lport a49245b4-aeef-48fb-9de7-cb4496fa9385 up in Southbound
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.457 2 INFO nova.compute.manager [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Post operation of migration started
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.458 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.689 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.689 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.771 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-a35118d1-edb1-4920-8978-2712410057a1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.772 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-a35118d1-edb1-4920-8978-2712410057a1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:31:16 compute-0 nova_compute[117413]: 2025-10-08 16:31:16.772 2 DEBUG nova.network.neutron [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:31:17 compute-0 nova_compute[117413]: 2025-10-08 16:31:17.279 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:17 compute-0 nova_compute[117413]: 2025-10-08 16:31:17.864 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:31:18 compute-0 nova_compute[117413]: 2025-10-08 16:31:18.109 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:18 compute-0 nova_compute[117413]: 2025-10-08 16:31:18.270 2 DEBUG nova.network.neutron [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Updating instance_info_cache with network_info: [{"id": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "address": "fa:16:3e:c5:bb:fd", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49245b4-ae", "ovs_interfaceid": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:31:18 compute-0 nova_compute[117413]: 2025-10-08 16:31:18.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:18 compute-0 nova_compute[117413]: 2025-10-08 16:31:18.778 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-a35118d1-edb1-4920-8978-2712410057a1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:31:19 compute-0 nova_compute[117413]: 2025-10-08 16:31:19.296 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:19 compute-0 nova_compute[117413]: 2025-10-08 16:31:19.297 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:19 compute-0 nova_compute[117413]: 2025-10-08 16:31:19.297 2 DEBUG oslo_concurrency.lockutils [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:19 compute-0 nova_compute[117413]: 2025-10-08 16:31:19.301 2 INFO nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:31:19 compute-0 virtqemud[117740]: Domain id=15 name='instance-00000012' uuid=a35118d1-edb1-4920-8978-2712410057a1 is tainted: custom-monitor
Oct 08 16:31:19 compute-0 podman[148502]: 2025-10-08 16:31:19.460668989 +0000 UTC m=+0.068556558 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:31:20 compute-0 nova_compute[117413]: 2025-10-08 16:31:20.307 2 INFO nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:31:21 compute-0 nova_compute[117413]: 2025-10-08 16:31:21.314 2 INFO nova.virt.libvirt.driver [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:31:21 compute-0 nova_compute[117413]: 2025-10-08 16:31:21.318 2 DEBUG nova.compute.manager [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:31:21 compute-0 nova_compute[117413]: 2025-10-08 16:31:21.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:21 compute-0 nova_compute[117413]: 2025-10-08 16:31:21.828 2 DEBUG nova.objects.instance [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:31:22 compute-0 nova_compute[117413]: 2025-10-08 16:31:22.846 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:23 compute-0 nova_compute[117413]: 2025-10-08 16:31:23.048 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:23 compute-0 nova_compute[117413]: 2025-10-08 16:31:23.048 2 WARNING neutronclient.v2_0.client [None req-b165aa39-aff2-4c7a-beaf-71b6ce7ff211 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:23 compute-0 nova_compute[117413]: 2025-10-08 16:31:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:24 compute-0 podman[148522]: 2025-10-08 16:31:24.466619165 +0000 UTC m=+0.077854875 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.289 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.290 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.290 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.291 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.291 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.304 2 INFO nova.compute.manager [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Terminating instance
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.825 2 DEBUG nova.compute.manager [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:31:26 compute-0 kernel: tap2d5e5ce6-d5 (unregistering): left promiscuous mode
Oct 08 16:31:26 compute-0 NetworkManager[1034]: <info>  [1759941086.8565] device (tap2d5e5ce6-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:31:26 compute-0 ovn_controller[19768]: 2025-10-08T16:31:26Z|00164|binding|INFO|Releasing lport 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f from this chassis (sb_readonly=0)
Oct 08 16:31:26 compute-0 ovn_controller[19768]: 2025-10-08T16:31:26Z|00165|binding|INFO|Setting lport 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f down in Southbound
Oct 08 16:31:26 compute-0 ovn_controller[19768]: 2025-10-08T16:31:26Z|00166|binding|INFO|Removing iface tap2d5e5ce6-d5 ovn-installed in OVS
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.874 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:26:e7 10.100.0.11'], port_security=['fa:16:3e:14:26:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0d360634-7168-4ee5-af98-052f1a3003a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '5', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.875 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea unbound from our chassis
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.877 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea
Oct 08 16:31:26 compute-0 nova_compute[117413]: 2025-10-08 16:31:26.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.899 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[684266e2-fa06-4132-a00c-adcf03b936a4]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 08 16:31:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000013.scope: Consumed 13.252s CPU time.
Oct 08 16:31:26 compute-0 systemd-machined[77548]: Machine qemu-14-instance-00000013 terminated.
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.933 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee4bdf2-d5a8-45ed-aa16-4476a631aac6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.936 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba83548-64ba-41e2-9fe5-49e58b941e51]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.972 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[9177967b-7ac7-403f-baf2-609b7dae753d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:26.991 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6588a1-fd28-4030-b790-319e83d5a418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeaa04398-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cb:0e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 233093, 'reachable_time': 16299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148555, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.014 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0f9b4d-f9da-453e-a205-fbaf4cf71e8d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 233107, 'tstamp': 233107}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148556, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeaa04398-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 233110, 'tstamp': 233110}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 148556, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.016 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.022 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeaa04398-50, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.023 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.023 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeaa04398-50, col_values=(('external_ids', {'iface-id': '4ea2ba2c-a72c-4801-bc39-9e3f03f24d1c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.023 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:31:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:27.024 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e011d2fc-7757-4963-821f-ada7462dde5f]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID eaa04398-576e-4a18-a2fe-a6a0b2d52eea\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.101 2 INFO nova.virt.libvirt.driver [-] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Instance destroyed successfully.
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.102 2 DEBUG nova.objects.instance [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lazy-loading 'resources' on Instance uuid 0d360634-7168-4ee5-af98-052f1a3003a3 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.110 2 DEBUG nova.compute.manager [req-60c84917-a469-41db-bb5e-a37ebe55da9d req-955f26a7-5ad6-49cf-a0e1-6a3361353e4b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-unplugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.110 2 DEBUG oslo_concurrency.lockutils [req-60c84917-a469-41db-bb5e-a37ebe55da9d req-955f26a7-5ad6-49cf-a0e1-6a3361353e4b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.110 2 DEBUG oslo_concurrency.lockutils [req-60c84917-a469-41db-bb5e-a37ebe55da9d req-955f26a7-5ad6-49cf-a0e1-6a3361353e4b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.111 2 DEBUG oslo_concurrency.lockutils [req-60c84917-a469-41db-bb5e-a37ebe55da9d req-955f26a7-5ad6-49cf-a0e1-6a3361353e4b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.111 2 DEBUG nova.compute.manager [req-60c84917-a469-41db-bb5e-a37ebe55da9d req-955f26a7-5ad6-49cf-a0e1-6a3361353e4b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] No waiting events found dispatching network-vif-unplugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.111 2 DEBUG nova.compute.manager [req-60c84917-a469-41db-bb5e-a37ebe55da9d req-955f26a7-5ad6-49cf-a0e1-6a3361353e4b c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-unplugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.625 2 DEBUG nova.virt.libvirt.vif [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:30:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-902349060',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-902',id=19,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:30:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-w05ohmnw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:30:43Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=0d360634-7168-4ee5-af98-052f1a3003a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.626 2 DEBUG nova.network.os_vif_util [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "address": "fa:16:3e:14:26:e7", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d5e5ce6-d5", "ovs_interfaceid": "2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.627 2 DEBUG nova.network.os_vif_util [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.627 2 DEBUG os_vif [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d5e5ce6-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=6cee7474-5a33-4e74-818c-535b9bea510e) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.639 2 INFO os_vif [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:26:e7,bridge_name='br-int',has_traffic_filtering=True,id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d5e5ce6-d5')
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.640 2 INFO nova.virt.libvirt.driver [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Deleting instance files /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3_del
Oct 08 16:31:27 compute-0 nova_compute[117413]: 2025-10-08 16:31:27.641 2 INFO nova.virt.libvirt.driver [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Deletion of /var/lib/nova/instances/0d360634-7168-4ee5-af98-052f1a3003a3_del complete
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.153 2 INFO nova.compute.manager [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.153 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.153 2 DEBUG nova.compute.manager [-] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.153 2 DEBUG nova.network.neutron [-] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.154 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.478 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:28 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:28.691 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:28 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:28.693 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.792 2 DEBUG nova.compute.manager [req-366ab051-4352-4b86-91c4-a55c4eac4dca req-6d054372-581a-4716-8a41-890cfe87ecc9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-deleted-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.793 2 INFO nova.compute.manager [req-366ab051-4352-4b86-91c4-a55c4eac4dca req-6d054372-581a-4716-8a41-890cfe87ecc9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Neutron deleted interface 2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f; detaching it from the instance and deleting it from the info cache
Oct 08 16:31:28 compute-0 nova_compute[117413]: 2025-10-08 16:31:28.793 2 DEBUG nova.network.neutron [req-366ab051-4352-4b86-91c4-a55c4eac4dca req-6d054372-581a-4716-8a41-890cfe87ecc9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.161 2 DEBUG nova.compute.manager [req-9f4c20c9-ac2a-47d7-ab68-037609634ee7 req-ffef6423-6f93-4e93-8f18-4467d812bb97 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-unplugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.161 2 DEBUG oslo_concurrency.lockutils [req-9f4c20c9-ac2a-47d7-ab68-037609634ee7 req-ffef6423-6f93-4e93-8f18-4467d812bb97 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.162 2 DEBUG oslo_concurrency.lockutils [req-9f4c20c9-ac2a-47d7-ab68-037609634ee7 req-ffef6423-6f93-4e93-8f18-4467d812bb97 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.162 2 DEBUG oslo_concurrency.lockutils [req-9f4c20c9-ac2a-47d7-ab68-037609634ee7 req-ffef6423-6f93-4e93-8f18-4467d812bb97 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.162 2 DEBUG nova.compute.manager [req-9f4c20c9-ac2a-47d7-ab68-037609634ee7 req-ffef6423-6f93-4e93-8f18-4467d812bb97 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] No waiting events found dispatching network-vif-unplugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.162 2 DEBUG nova.compute.manager [req-9f4c20c9-ac2a-47d7-ab68-037609634ee7 req-ffef6423-6f93-4e93-8f18-4467d812bb97 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Received event network-vif-unplugged-2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.235 2 DEBUG nova.network.neutron [-] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.300 2 DEBUG nova.compute.manager [req-366ab051-4352-4b86-91c4-a55c4eac4dca req-6d054372-581a-4716-8a41-890cfe87ecc9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Detach interface failed, port_id=2d5e5ce6-d5b1-41f3-be25-1ad2ab0dd16f, reason: Instance 0d360634-7168-4ee5-af98-052f1a3003a3 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:31:29 compute-0 nova_compute[117413]: 2025-10-08 16:31:29.742 2 INFO nova.compute.manager [-] [instance: 0d360634-7168-4ee5-af98-052f1a3003a3] Took 1.59 seconds to deallocate network for instance.
Oct 08 16:31:29 compute-0 podman[127881]: time="2025-10-08T16:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:31:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:31:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3485 "" "Go-http-client/1.1"
Oct 08 16:31:30 compute-0 nova_compute[117413]: 2025-10-08 16:31:30.269 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:30 compute-0 nova_compute[117413]: 2025-10-08 16:31:30.269 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:30 compute-0 nova_compute[117413]: 2025-10-08 16:31:30.341 2 DEBUG nova.compute.provider_tree [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:31:30 compute-0 podman[148576]: 2025-10-08 16:31:30.473853137 +0000 UTC m=+0.075519389 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Oct 08 16:31:30 compute-0 nova_compute[117413]: 2025-10-08 16:31:30.852 2 DEBUG nova.scheduler.client.report [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:31:31 compute-0 nova_compute[117413]: 2025-10-08 16:31:31.361 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.092s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:31 compute-0 nova_compute[117413]: 2025-10-08 16:31:31.381 2 INFO nova.scheduler.client.report [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Deleted allocations for instance 0d360634-7168-4ee5-af98-052f1a3003a3
Oct 08 16:31:31 compute-0 nova_compute[117413]: 2025-10-08 16:31:31.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: ERROR   16:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: ERROR   16:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: ERROR   16:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: ERROR   16:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:31:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:31:32 compute-0 nova_compute[117413]: 2025-10-08 16:31:32.415 2 DEBUG oslo_concurrency.lockutils [None req-6321d73f-6412-40e9-a2e4-aed7a2979b0b a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "0d360634-7168-4ee5-af98-052f1a3003a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.125s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:32 compute-0 nova_compute[117413]: 2025-10-08 16:31:32.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.128 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "a35118d1-edb1-4920-8978-2712410057a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.129 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.129 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "a35118d1-edb1-4920-8978-2712410057a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.129 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.130 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.144 2 INFO nova.compute.manager [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Terminating instance
Oct 08 16:31:33 compute-0 podman[148596]: 2025-10-08 16:31:33.468261288 +0000 UTC m=+0.069229128 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.662 2 DEBUG nova.compute.manager [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:31:33 compute-0 kernel: tapa49245b4-ae (unregistering): left promiscuous mode
Oct 08 16:31:33 compute-0 NetworkManager[1034]: <info>  [1759941093.6924] device (tapa49245b4-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:31:33 compute-0 ovn_controller[19768]: 2025-10-08T16:31:33Z|00167|binding|INFO|Releasing lport a49245b4-aeef-48fb-9de7-cb4496fa9385 from this chassis (sb_readonly=0)
Oct 08 16:31:33 compute-0 ovn_controller[19768]: 2025-10-08T16:31:33Z|00168|binding|INFO|Setting lport a49245b4-aeef-48fb-9de7-cb4496fa9385 down in Southbound
Oct 08 16:31:33 compute-0 ovn_controller[19768]: 2025-10-08T16:31:33Z|00169|binding|INFO|Removing iface tapa49245b4-ae ovn-installed in OVS
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:33.749 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:bb:fd 10.100.0.9'], port_security=['fa:16:3e:c5:bb:fd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a35118d1-edb1-4920-8978-2712410057a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621f620ded214ac792354cb32ce3de49', 'neutron:revision_number': '14', 'neutron:security_group_ids': '44bcc6c4-216d-4445-a813-dacde8875d66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a57d6524-8805-463f-b41a-d3218c332981, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=a49245b4-aeef-48fb-9de7-cb4496fa9385) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:31:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:33.749 28633 INFO neutron.agent.ovn.metadata.agent [-] Port a49245b4-aeef-48fb-9de7-cb4496fa9385 in datapath eaa04398-576e-4a18-a2fe-a6a0b2d52eea unbound from our chassis
Oct 08 16:31:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:33.750 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eaa04398-576e-4a18-a2fe-a6a0b2d52eea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:31:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:33.751 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e91f6df0-9b29-4c8a-af2e-cb2c7e4d294b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:33.752 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea namespace which is not needed anymore
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:33 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 08 16:31:33 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000012.scope: Consumed 2.145s CPU time.
Oct 08 16:31:33 compute-0 systemd-machined[77548]: Machine qemu-15-instance-00000012 terminated.
Oct 08 16:31:33 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [NOTICE]   (148256) : haproxy version is 3.0.5-8e879a5
Oct 08 16:31:33 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [NOTICE]   (148256) : path to executable is /usr/sbin/haproxy
Oct 08 16:31:33 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [WARNING]  (148256) : Exiting Master process...
Oct 08 16:31:33 compute-0 podman[148641]: 2025-10-08 16:31:33.885223366 +0000 UTC m=+0.035896161 container kill 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 08 16:31:33 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [ALERT]    (148256) : Current worker (148258) exited with code 143 (Terminated)
Oct 08 16:31:33 compute-0 neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea[148252]: [WARNING]  (148256) : All workers exited. Exiting... (0)
Oct 08 16:31:33 compute-0 systemd[1]: libpod-503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811.scope: Deactivated successfully.
Oct 08 16:31:33 compute-0 podman[148661]: 2025-10-08 16:31:33.929292041 +0000 UTC m=+0.027349946 container died 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.934 2 INFO nova.virt.libvirt.driver [-] [instance: a35118d1-edb1-4920-8978-2712410057a1] Instance destroyed successfully.
Oct 08 16:31:33 compute-0 nova_compute[117413]: 2025-10-08 16:31:33.934 2 DEBUG nova.objects.instance [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lazy-loading 'resources' on Instance uuid a35118d1-edb1-4920-8978-2712410057a1 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811-userdata-shm.mount: Deactivated successfully.
Oct 08 16:31:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f6c74842704733934ec4d2a0902546b5a05b11e1ec472978b4d0c62dfa954be-merged.mount: Deactivated successfully.
Oct 08 16:31:33 compute-0 podman[148661]: 2025-10-08 16:31:33.975919229 +0000 UTC m=+0.073977114 container cleanup 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:31:33 compute-0 systemd[1]: libpod-conmon-503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811.scope: Deactivated successfully.
Oct 08 16:31:33 compute-0 podman[148676]: 2025-10-08 16:31:33.99722381 +0000 UTC m=+0.073186071 container remove 503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.016 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[01d66c42-41b4-4b2f-af21-e7b97364c0f7]: (4, ("Wed Oct  8 04:31:33 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea (503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811)\n503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811\nWed Oct  8 04:31:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea (503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811)\n503ed678e124233304c320c60fc36c2fafd977ddce17db278837459370f75811\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.019 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[793a823b-8862-487a-a374-5391260ce4e1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.019 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eaa04398-576e-4a18-a2fe-a6a0b2d52eea.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.020 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5df31ed6-4c01-468f-9ede-00a856e0d8a0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.021 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeaa04398-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 kernel: tapeaa04398-50: left promiscuous mode
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.043 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c370ebd5-9645-4e96-8efa-a01b0b77ea13]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.069 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[176979fb-fcec-472a-9e88-63d60e9c086a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.070 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f0b91c-3b30-4d69-8992-3300628d8eba]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.086 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[df2b883f-7ef8-4885-9b1a-ae87b68d0e1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 233085, 'reachable_time': 36030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 148703, 'error': None, 'target': 'ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.088 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eaa04398-576e-4a18-a2fe-a6a0b2d52eea deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:31:34 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:34.088 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[11a3354f-8dda-4bab-a632-fab912763197]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:31:34 compute-0 systemd[1]: run-netns-ovnmeta\x2deaa04398\x2d576e\x2d4a18\x2da2fe\x2da6a0b2d52eea.mount: Deactivated successfully.
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.144 2 DEBUG nova.compute.manager [req-8ba108ba-461f-495e-a8f2-8fe3ec40ddb8 req-aadec2ec-30d5-4791-8ae6-b0d06289d9a0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Received event network-vif-unplugged-a49245b4-aeef-48fb-9de7-cb4496fa9385 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.145 2 DEBUG oslo_concurrency.lockutils [req-8ba108ba-461f-495e-a8f2-8fe3ec40ddb8 req-aadec2ec-30d5-4791-8ae6-b0d06289d9a0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a35118d1-edb1-4920-8978-2712410057a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.146 2 DEBUG oslo_concurrency.lockutils [req-8ba108ba-461f-495e-a8f2-8fe3ec40ddb8 req-aadec2ec-30d5-4791-8ae6-b0d06289d9a0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.146 2 DEBUG oslo_concurrency.lockutils [req-8ba108ba-461f-495e-a8f2-8fe3ec40ddb8 req-aadec2ec-30d5-4791-8ae6-b0d06289d9a0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.147 2 DEBUG nova.compute.manager [req-8ba108ba-461f-495e-a8f2-8fe3ec40ddb8 req-aadec2ec-30d5-4791-8ae6-b0d06289d9a0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] No waiting events found dispatching network-vif-unplugged-a49245b4-aeef-48fb-9de7-cb4496fa9385 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.147 2 DEBUG nova.compute.manager [req-8ba108ba-461f-495e-a8f2-8fe3ec40ddb8 req-aadec2ec-30d5-4791-8ae6-b0d06289d9a0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Received event network-vif-unplugged-a49245b4-aeef-48fb-9de7-cb4496fa9385 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.443 2 DEBUG nova.virt.libvirt.vif [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:30:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteNodeResourceConsolidationStrategy-server-713322440',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutenoderesourceconsolidationstrategy-server-713',id=18,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:30:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='621f620ded214ac792354cb32ce3de49',ramdisk_id='',reservation_id='r-5t77m6gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',imag
e_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320',owner_user_name='tempest-TestExecuteNodeResourceConsolidationStrategy-1882474320-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:31:22Z,user_data=None,user_id='a35a495eee564e31a6dce3a5c601665c',uuid=a35118d1-edb1-4920-8978-2712410057a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "address": "fa:16:3e:c5:bb:fd", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49245b4-ae", "ovs_interfaceid": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.443 2 DEBUG nova.network.os_vif_util [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converting VIF {"id": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "address": "fa:16:3e:c5:bb:fd", "network": {"id": "eaa04398-576e-4a18-a2fe-a6a0b2d52eea", "bridge": "br-int", "label": "tempest-TestExecuteNodeResourceConsolidationStrategy-1706316997-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7d8211aa56344219a4778e4641775b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa49245b4-ae", "ovs_interfaceid": "a49245b4-aeef-48fb-9de7-cb4496fa9385", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.444 2 DEBUG nova.network.os_vif_util [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c5:bb:fd,bridge_name='br-int',has_traffic_filtering=True,id=a49245b4-aeef-48fb-9de7-cb4496fa9385,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49245b4-ae') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.444 2 DEBUG os_vif [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:bb:fd,bridge_name='br-int',has_traffic_filtering=True,id=a49245b4-aeef-48fb-9de7-cb4496fa9385,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49245b4-ae') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa49245b4-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=ee814532-e9a2-4448-a6dd-a0c8347be1f3) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.454 2 INFO os_vif [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:bb:fd,bridge_name='br-int',has_traffic_filtering=True,id=a49245b4-aeef-48fb-9de7-cb4496fa9385,network=Network(eaa04398-576e-4a18-a2fe-a6a0b2d52eea),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa49245b4-ae')
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.454 2 INFO nova.virt.libvirt.driver [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Deleting instance files /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1_del
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.455 2 INFO nova.virt.libvirt.driver [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Deletion of /var/lib/nova/instances/a35118d1-edb1-4920-8978-2712410057a1_del complete
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.967 2 INFO nova.compute.manager [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.967 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.968 2 DEBUG nova.compute.manager [-] [instance: a35118d1-edb1-4920-8978-2712410057a1] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.968 2 DEBUG nova.network.neutron [-] [instance: a35118d1-edb1-4920-8978-2712410057a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:31:34 compute-0 nova_compute[117413]: 2025-10-08 16:31:34.968 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.013 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.189 2 DEBUG nova.compute.manager [req-885b5931-13ab-49ce-b610-efd49cbd0b65 req-ca2ea19a-f0f4-40d7-8df8-a5d7405f21ac c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Received event network-vif-unplugged-a49245b4-aeef-48fb-9de7-cb4496fa9385 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.190 2 DEBUG oslo_concurrency.lockutils [req-885b5931-13ab-49ce-b610-efd49cbd0b65 req-ca2ea19a-f0f4-40d7-8df8-a5d7405f21ac c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "a35118d1-edb1-4920-8978-2712410057a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.190 2 DEBUG oslo_concurrency.lockutils [req-885b5931-13ab-49ce-b610-efd49cbd0b65 req-ca2ea19a-f0f4-40d7-8df8-a5d7405f21ac c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.190 2 DEBUG oslo_concurrency.lockutils [req-885b5931-13ab-49ce-b610-efd49cbd0b65 req-ca2ea19a-f0f4-40d7-8df8-a5d7405f21ac c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.190 2 DEBUG nova.compute.manager [req-885b5931-13ab-49ce-b610-efd49cbd0b65 req-ca2ea19a-f0f4-40d7-8df8-a5d7405f21ac c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] No waiting events found dispatching network-vif-unplugged-a49245b4-aeef-48fb-9de7-cb4496fa9385 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.191 2 DEBUG nova.compute.manager [req-885b5931-13ab-49ce-b610-efd49cbd0b65 req-ca2ea19a-f0f4-40d7-8df8-a5d7405f21ac c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Received event network-vif-unplugged-a49245b4-aeef-48fb-9de7-cb4496fa9385 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:31:36 compute-0 nova_compute[117413]: 2025-10-08 16:31:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:37 compute-0 nova_compute[117413]: 2025-10-08 16:31:37.178 2 DEBUG nova.network.neutron [-] [instance: a35118d1-edb1-4920-8978-2712410057a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:31:37 compute-0 nova_compute[117413]: 2025-10-08 16:31:37.685 2 INFO nova.compute.manager [-] [instance: a35118d1-edb1-4920-8978-2712410057a1] Took 2.72 seconds to deallocate network for instance.
Oct 08 16:31:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:37.696 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:31:38 compute-0 nova_compute[117413]: 2025-10-08 16:31:38.204 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:38 compute-0 nova_compute[117413]: 2025-10-08 16:31:38.204 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:38 compute-0 nova_compute[117413]: 2025-10-08 16:31:38.209 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:38 compute-0 nova_compute[117413]: 2025-10-08 16:31:38.235 2 DEBUG nova.compute.manager [req-756ba070-aa98-48bb-a162-e59a139b8806 req-96673ae3-40ca-4e9d-96de-8c0afaecc92f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: a35118d1-edb1-4920-8978-2712410057a1] Received event network-vif-deleted-a49245b4-aeef-48fb-9de7-cb4496fa9385 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:31:38 compute-0 nova_compute[117413]: 2025-10-08 16:31:38.240 2 INFO nova.scheduler.client.report [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Deleted allocations for instance a35118d1-edb1-4920-8978-2712410057a1
Oct 08 16:31:39 compute-0 nova_compute[117413]: 2025-10-08 16:31:39.273 2 DEBUG oslo_concurrency.lockutils [None req-9cb4f487-b528-465f-b228-482ecd99baba a35a495eee564e31a6dce3a5c601665c 621f620ded214ac792354cb32ce3de49 - - default default] Lock "a35118d1-edb1-4920-8978-2712410057a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.144s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:39 compute-0 nova_compute[117413]: 2025-10-08 16:31:39.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:39 compute-0 podman[148704]: 2025-10-08 16:31:39.487747663 +0000 UTC m=+0.078643088 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:31:39 compute-0 podman[148705]: 2025-10-08 16:31:39.528625656 +0000 UTC m=+0.116088143 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:31:41 compute-0 nova_compute[117413]: 2025-10-08 16:31:41.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:41.915 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:31:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:41.915 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:31:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:31:41.916 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:31:44 compute-0 nova_compute[117413]: 2025-10-08 16:31:44.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:46 compute-0 nova_compute[117413]: 2025-10-08 16:31:46.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:49 compute-0 nova_compute[117413]: 2025-10-08 16:31:49.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:50 compute-0 podman[148754]: 2025-10-08 16:31:50.480825462 +0000 UTC m=+0.072626996 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:31:51 compute-0 nova_compute[117413]: 2025-10-08 16:31:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:54 compute-0 nova_compute[117413]: 2025-10-08 16:31:54.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:55 compute-0 podman[148774]: 2025-10-08 16:31:55.44022377 +0000 UTC m=+0.051399946 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=)
Oct 08 16:31:56 compute-0 nova_compute[117413]: 2025-10-08 16:31:56.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:58 compute-0 nova_compute[117413]: 2025-10-08 16:31:58.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:59 compute-0 nova_compute[117413]: 2025-10-08 16:31:59.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:31:59 compute-0 podman[127881]: time="2025-10-08T16:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:31:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:31:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: ERROR   16:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: ERROR   16:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: ERROR   16:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: ERROR   16:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:32:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:32:01 compute-0 nova_compute[117413]: 2025-10-08 16:32:01.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:01 compute-0 podman[148793]: 2025-10-08 16:32:01.490151797 +0000 UTC m=+0.096693006 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, container_name=iscsid)
Oct 08 16:32:04 compute-0 podman[148814]: 2025-10-08 16:32:04.456558205 +0000 UTC m=+0.063890225 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 08 16:32:04 compute-0 nova_compute[117413]: 2025-10-08 16:32:04.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:05 compute-0 nova_compute[117413]: 2025-10-08 16:32:05.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:06.097 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:30:98 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5952b311-868c-4aad-8037-80ac85c56954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9cc61fc52354bf197dc66c86673dfe2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b730dc6-0991-46d6-9b8c-1ba4ba734d3f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=728e9503-a577-402b-9f9a-3eb90576507b) old=Port_Binding(mac=['fa:16:3e:64:30:98'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5952b311-868c-4aad-8037-80ac85c56954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9cc61fc52354bf197dc66c86673dfe2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:32:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:06.097 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 728e9503-a577-402b-9f9a-3eb90576507b in datapath 5952b311-868c-4aad-8037-80ac85c56954 updated
Oct 08 16:32:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:06.098 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5952b311-868c-4aad-8037-80ac85c56954, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:32:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:06.099 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e96a1b21-853e-4d9c-a43a-8c9ac7194876]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:06 compute-0 nova_compute[117413]: 2025-10-08 16:32:06.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:07 compute-0 nova_compute[117413]: 2025-10-08 16:32:07.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:07 compute-0 nova_compute[117413]: 2025-10-08 16:32:07.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:32:09 compute-0 nova_compute[117413]: 2025-10-08 16:32:09.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:09 compute-0 nova_compute[117413]: 2025-10-08 16:32:09.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:09 compute-0 nova_compute[117413]: 2025-10-08 16:32:09.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:10 compute-0 nova_compute[117413]: 2025-10-08 16:32:10.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:10 compute-0 podman[148833]: 2025-10-08 16:32:10.483846751 +0000 UTC m=+0.084553337 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:32:10 compute-0 podman[148834]: 2025-10-08 16:32:10.521495222 +0000 UTC m=+0.107229859 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.875 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:11 compute-0 nova_compute[117413]: 2025-10-08 16:32:11.876 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:32:12 compute-0 nova_compute[117413]: 2025-10-08 16:32:12.062 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:32:12 compute-0 nova_compute[117413]: 2025-10-08 16:32:12.063 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:12.078 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:00:9c 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8097eba3-3a8f-4e01-956c-a5c13cdc7e49', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8097eba3-3a8f-4e01-956c-a5c13cdc7e49', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddb23534363a4dee8a87d68059ccced6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf5c91af-8727-4c6f-ae11-58b730fd8ab0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=b899055c-d202-4b54-8c5e-c1ee05e6806e) old=Port_Binding(mac=['fa:16:3e:21:00:9c'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-8097eba3-3a8f-4e01-956c-a5c13cdc7e49', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8097eba3-3a8f-4e01-956c-a5c13cdc7e49', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddb23534363a4dee8a87d68059ccced6', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:32:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:12.080 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port b899055c-d202-4b54-8c5e-c1ee05e6806e in datapath 8097eba3-3a8f-4e01-956c-a5c13cdc7e49 updated
Oct 08 16:32:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:12.081 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8097eba3-3a8f-4e01-956c-a5c13cdc7e49, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:32:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:12.081 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e78c31e5-d9bc-4a43-8141-6d087c14bfd2]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:12 compute-0 nova_compute[117413]: 2025-10-08 16:32:12.089 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:12 compute-0 nova_compute[117413]: 2025-10-08 16:32:12.090 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6134MB free_disk=73.25057220458984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:32:12 compute-0 nova_compute[117413]: 2025-10-08 16:32:12.090 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:12 compute-0 nova_compute[117413]: 2025-10-08 16:32:12.090 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:13 compute-0 nova_compute[117413]: 2025-10-08 16:32:13.147 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:32:13 compute-0 nova_compute[117413]: 2025-10-08 16:32:13.148 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:32:12 up 40 min,  0 user,  load average: 0.07, 0.19, 0.23\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:32:13 compute-0 nova_compute[117413]: 2025-10-08 16:32:13.166 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:32:13 compute-0 nova_compute[117413]: 2025-10-08 16:32:13.673 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:32:14 compute-0 nova_compute[117413]: 2025-10-08 16:32:14.187 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:32:14 compute-0 nova_compute[117413]: 2025-10-08 16:32:14.188 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.097s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:14 compute-0 nova_compute[117413]: 2025-10-08 16:32:14.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:15 compute-0 nova_compute[117413]: 2025-10-08 16:32:15.188 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:16 compute-0 nova_compute[117413]: 2025-10-08 16:32:16.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:19 compute-0 nova_compute[117413]: 2025-10-08 16:32:19.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:20 compute-0 nova_compute[117413]: 2025-10-08 16:32:20.359 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:32:21 compute-0 nova_compute[117413]: 2025-10-08 16:32:21.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:21 compute-0 podman[148883]: 2025-10-08 16:32:21.496329119 +0000 UTC m=+0.091638803 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 08 16:32:24 compute-0 nova_compute[117413]: 2025-10-08 16:32:24.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:26 compute-0 nova_compute[117413]: 2025-10-08 16:32:26.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:26 compute-0 podman[148901]: 2025-10-08 16:32:26.455164841 +0000 UTC m=+0.064817411 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git)
Oct 08 16:32:29 compute-0 nova_compute[117413]: 2025-10-08 16:32:29.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:29 compute-0 podman[127881]: time="2025-10-08T16:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:32:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:32:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3024 "" "Go-http-client/1.1"
Oct 08 16:32:31 compute-0 ovn_controller[19768]: 2025-10-08T16:32:31Z|00170|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: ERROR   16:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: ERROR   16:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: ERROR   16:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: ERROR   16:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:32:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:32:31 compute-0 nova_compute[117413]: 2025-10-08 16:32:31.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:32 compute-0 podman[148922]: 2025-10-08 16:32:32.456008609 +0000 UTC m=+0.065942314 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 08 16:32:34 compute-0 nova_compute[117413]: 2025-10-08 16:32:34.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:35 compute-0 podman[148940]: 2025-10-08 16:32:35.46257804 +0000 UTC m=+0.063569756 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 08 16:32:36 compute-0 nova_compute[117413]: 2025-10-08 16:32:36.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:37 compute-0 nova_compute[117413]: 2025-10-08 16:32:37.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:37.070 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:32:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:37.072 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:32:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:37.073 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:39 compute-0 nova_compute[117413]: 2025-10-08 16:32:39.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:41 compute-0 nova_compute[117413]: 2025-10-08 16:32:41.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:41 compute-0 podman[148960]: 2025-10-08 16:32:41.49134537 +0000 UTC m=+0.084582228 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:32:41 compute-0 podman[148961]: 2025-10-08 16:32:41.544087304 +0000 UTC m=+0.137401015 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:32:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:41.917 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:41.917 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:41.918 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:42 compute-0 nova_compute[117413]: 2025-10-08 16:32:42.260 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:42 compute-0 nova_compute[117413]: 2025-10-08 16:32:42.261 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:42 compute-0 nova_compute[117413]: 2025-10-08 16:32:42.767 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:32:43 compute-0 nova_compute[117413]: 2025-10-08 16:32:43.323 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:43 compute-0 nova_compute[117413]: 2025-10-08 16:32:43.324 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:43 compute-0 nova_compute[117413]: 2025-10-08 16:32:43.331 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:32:43 compute-0 nova_compute[117413]: 2025-10-08 16:32:43.332 2 INFO nova.compute.claims [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:32:44 compute-0 nova_compute[117413]: 2025-10-08 16:32:44.390 2 DEBUG nova.compute.provider_tree [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:32:44 compute-0 nova_compute[117413]: 2025-10-08 16:32:44.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:44 compute-0 nova_compute[117413]: 2025-10-08 16:32:44.899 2 DEBUG nova.scheduler.client.report [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:32:45 compute-0 nova_compute[117413]: 2025-10-08 16:32:45.409 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.085s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:45 compute-0 nova_compute[117413]: 2025-10-08 16:32:45.411 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:32:45 compute-0 nova_compute[117413]: 2025-10-08 16:32:45.926 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:32:45 compute-0 nova_compute[117413]: 2025-10-08 16:32:45.927 2 DEBUG nova.network.neutron [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:32:45 compute-0 nova_compute[117413]: 2025-10-08 16:32:45.927 2 WARNING neutronclient.v2_0.client [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:32:45 compute-0 nova_compute[117413]: 2025-10-08 16:32:45.928 2 WARNING neutronclient.v2_0.client [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:32:46 compute-0 nova_compute[117413]: 2025-10-08 16:32:46.439 2 INFO nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:32:46 compute-0 nova_compute[117413]: 2025-10-08 16:32:46.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:46 compute-0 nova_compute[117413]: 2025-10-08 16:32:46.506 2 DEBUG nova.network.neutron [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Successfully created port: 986fc122-b6de-499e-83b7-12ae4247a345 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:32:46 compute-0 nova_compute[117413]: 2025-10-08 16:32:46.952 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.134 2 DEBUG nova.network.neutron [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Successfully updated port: 986fc122-b6de-499e-83b7-12ae4247a345 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.212 2 DEBUG nova.compute.manager [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-changed-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.213 2 DEBUG nova.compute.manager [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Refreshing instance network info cache due to event network-changed-986fc122-b6de-499e-83b7-12ae4247a345. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.213 2 DEBUG oslo_concurrency.lockutils [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.213 2 DEBUG oslo_concurrency.lockutils [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.213 2 DEBUG nova.network.neutron [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Refreshing network info cache for port 986fc122-b6de-499e-83b7-12ae4247a345 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.642 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.719 2 WARNING neutronclient.v2_0.client [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.808 2 DEBUG nova.network.neutron [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.950 2 DEBUG nova.network.neutron [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.971 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.973 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.973 2 INFO nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Creating image(s)
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.974 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.975 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.976 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.977 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.983 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:32:47 compute-0 nova_compute[117413]: 2025-10-08 16:32:47.985 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.067 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.068 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.070 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.071 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.078 2 DEBUG oslo_utils.imageutils.format_inspector [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.079 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.140 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.142 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.201 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.202 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.133s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.203 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.263 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.265 2 DEBUG nova.virt.disk.api [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Checking if we can resize image /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.265 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.328 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.330 2 DEBUG nova.virt.disk.api [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Cannot resize image /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.331 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.332 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Ensure instance console log exists: /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.333 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.333 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.334 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.458 2 DEBUG oslo_concurrency.lockutils [req-dbd9adc2-892b-4701-9914-8c3bbf9b4edb req-20fa2e11-f1b6-4cd9-82df-43fe2a70cc3a c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.459 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquired lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:32:48 compute-0 nova_compute[117413]: 2025-10-08 16:32:48.460 2 DEBUG nova.network.neutron [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.075 2 DEBUG nova.network.neutron [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.266 2 WARNING neutronclient.v2_0.client [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.444 2 DEBUG nova.network.neutron [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Updating instance_info_cache with network_info: [{"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.957 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Releasing lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.958 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Instance network_info: |[{"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.962 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Start _get_guest_xml network_info=[{"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.968 2 WARNING nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.970 2 DEBUG nova.virt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514', uuid='214aa907-3edf-42c3-ac24-36294893f9df'), owner=OwnerMeta(userid='bdb98a9f1bf24f428912b8cdbbba458e', username='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin', projectid='ddb23534363a4dee8a87d68059ccced6', projectname='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": 
"986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759941169.97044) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.980 2 DEBUG nova.virt.libvirt.host [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.981 2 DEBUG nova.virt.libvirt.host [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.988 2 DEBUG nova.virt.libvirt.host [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.989 2 DEBUG nova.virt.libvirt.host [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.990 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.990 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.990 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.990 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.991 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.991 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.991 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.991 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.991 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.992 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.992 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.992 2 DEBUG nova.virt.hardware [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.996 2 DEBUG nova.virt.libvirt.vif [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:32:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1386125514',id=21,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddb23534363a4dee8a87d68059ccced6',ramdisk_id='',reservation_id='r-kfwyy0pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338',owner_user_name='tempest-T
estExecuteVmWorkloadBalanceStrategy-630971338-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:32:46Z,user_data=None,user_id='bdb98a9f1bf24f428912b8cdbbba458e',uuid=214aa907-3edf-42c3-ac24-36294893f9df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.996 2 DEBUG nova.network.os_vif_util [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Converting VIF {"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.997 2 DEBUG nova.network.os_vif_util [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:32:49 compute-0 nova_compute[117413]: 2025-10-08 16:32:49.997 2 DEBUG nova.objects.instance [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 214aa907-3edf-42c3-ac24-36294893f9df obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.505 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <uuid>214aa907-3edf-42c3-ac24-36294893f9df</uuid>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <name>instance-00000015</name>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514</nova:name>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:32:49</nova:creationTime>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:32:50 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:32:50 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:user uuid="bdb98a9f1bf24f428912b8cdbbba458e">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin</nova:user>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:project uuid="ddb23534363a4dee8a87d68059ccced6">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338</nova:project>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         <nova:port uuid="986fc122-b6de-499e-83b7-12ae4247a345">
Oct 08 16:32:50 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <system>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <entry name="serial">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <entry name="uuid">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </system>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <os>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </os>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <features>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </features>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:71:1a:d2"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <target dev="tap986fc122-b6"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <video>
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </video>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:32:50 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:32:50 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:32:50 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:32:50 compute-0 nova_compute[117413]: </domain>
Oct 08 16:32:50 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.506 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Preparing to wait for external event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.507 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.507 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.507 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.508 2 DEBUG nova.virt.libvirt.vif [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:32:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1386125514',id=21,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddb23534363a4dee8a87d68059ccced6',ramdisk_id='',reservation_id='r-kfwyy0pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338',owner_user_name=
'tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:32:46Z,user_data=None,user_id='bdb98a9f1bf24f428912b8cdbbba458e',uuid=214aa907-3edf-42c3-ac24-36294893f9df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.508 2 DEBUG nova.network.os_vif_util [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Converting VIF {"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.509 2 DEBUG nova.network.os_vif_util [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.509 2 DEBUG os_vif [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '5087b117-2557-5c18-9cc5-91633cfb5d1f', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap986fc122-b6, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap986fc122-b6, col_values=(('qos', UUID('2b5fe258-fd35-4fe8-8fbf-2d8c2df6e3c8')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap986fc122-b6, col_values=(('external_ids', {'iface-id': '986fc122-b6de-499e-83b7-12ae4247a345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:1a:d2', 'vm-uuid': '214aa907-3edf-42c3-ac24-36294893f9df'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:50 compute-0 NetworkManager[1034]: <info>  [1759941170.5197] manager: (tap986fc122-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:50 compute-0 nova_compute[117413]: 2025-10-08 16:32:50.525 2 INFO os_vif [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6')
Oct 08 16:32:51 compute-0 nova_compute[117413]: 2025-10-08 16:32:51.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:52 compute-0 nova_compute[117413]: 2025-10-08 16:32:52.063 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:32:52 compute-0 nova_compute[117413]: 2025-10-08 16:32:52.064 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:32:52 compute-0 nova_compute[117413]: 2025-10-08 16:32:52.064 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] No VIF found with MAC fa:16:3e:71:1a:d2, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:32:52 compute-0 nova_compute[117413]: 2025-10-08 16:32:52.065 2 INFO nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Using config drive
Oct 08 16:32:52 compute-0 podman[149023]: 2025-10-08 16:32:52.453066091 +0000 UTC m=+0.065270835 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 08 16:32:52 compute-0 nova_compute[117413]: 2025-10-08 16:32:52.575 2 WARNING neutronclient.v2_0.client [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:32:53 compute-0 nova_compute[117413]: 2025-10-08 16:32:53.845 2 INFO nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Creating config drive at /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config
Oct 08 16:32:53 compute-0 nova_compute[117413]: 2025-10-08 16:32:53.857 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmprdlhg9zp execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:32:53 compute-0 nova_compute[117413]: 2025-10-08 16:32:53.994 2 DEBUG oslo_concurrency.processutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmprdlhg9zp" returned: 0 in 0.137s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:32:54 compute-0 kernel: tap986fc122-b6: entered promiscuous mode
Oct 08 16:32:54 compute-0 NetworkManager[1034]: <info>  [1759941174.0842] manager: (tap986fc122-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 08 16:32:54 compute-0 ovn_controller[19768]: 2025-10-08T16:32:54Z|00171|binding|INFO|Claiming lport 986fc122-b6de-499e-83b7-12ae4247a345 for this chassis.
Oct 08 16:32:54 compute-0 ovn_controller[19768]: 2025-10-08T16:32:54Z|00172|binding|INFO|986fc122-b6de-499e-83b7-12ae4247a345: Claiming fa:16:3e:71:1a:d2 10.100.0.9
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.103 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:1a:d2 10.100.0.9'], port_security=['fa:16:3e:71:1a:d2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '214aa907-3edf-42c3-ac24-36294893f9df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5952b311-868c-4aad-8037-80ac85c56954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddb23534363a4dee8a87d68059ccced6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '59d0e94f-bb7e-4b50-a2ba-011426feae4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b730dc6-0991-46d6-9b8c-1ba4ba734d3f, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=986fc122-b6de-499e-83b7-12ae4247a345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.105 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 986fc122-b6de-499e-83b7-12ae4247a345 in datapath 5952b311-868c-4aad-8037-80ac85c56954 bound to our chassis
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.107 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5952b311-868c-4aad-8037-80ac85c56954
Oct 08 16:32:54 compute-0 systemd-udevd[149059]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.123 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c9746ad7-1de7-4b93-b536-bd8737218413]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.124 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5952b311-81 in ovnmeta-5952b311-868c-4aad-8037-80ac85c56954 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.126 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5952b311-80 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.126 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[52ac91a9-b069-4406-baa6-5eab6506a81a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.128 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e9ed35-e73a-4c5e-9d24-60182e68a1ba]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.138 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[89a99bdc-6eee-4b9b-a459-148126041561]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 NetworkManager[1034]: <info>  [1759941174.1423] device (tap986fc122-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:32:54 compute-0 NetworkManager[1034]: <info>  [1759941174.1435] device (tap986fc122-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.147 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a82d93ff-3fde-4742-8e4d-b94c7fc5dc6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 systemd-machined[77548]: New machine qemu-16-instance-00000015.
Oct 08 16:32:54 compute-0 ovn_controller[19768]: 2025-10-08T16:32:54Z|00173|binding|INFO|Setting lport 986fc122-b6de-499e-83b7-12ae4247a345 ovn-installed in OVS
Oct 08 16:32:54 compute-0 ovn_controller[19768]: 2025-10-08T16:32:54Z|00174|binding|INFO|Setting lport 986fc122-b6de-499e-83b7-12ae4247a345 up in Southbound
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000015.
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.187 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbb4451-017f-4bf1-9c93-3aa80a709ac1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 NetworkManager[1034]: <info>  [1759941174.1947] manager: (tap5952b311-80): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.194 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b8a070-e61b-40fe-a2b9-e6f609cf2f72]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 systemd-udevd[149065]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.232 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa29303-3908-4dea-b12b-2aa435e08c26]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.236 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[d100a467-c9fd-4148-b5b1-15fd8c35da59]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 NetworkManager[1034]: <info>  [1759941174.2649] device (tap5952b311-80): carrier: link connected
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.270 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c988cd-1739-4589-9b17-4bd6eba6cc02]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.290 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[297dbe18-e36b-448b-b0f4-a5ded4dd2c47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5952b311-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:30:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 246295, 'reachable_time': 41120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 149094, 'error': None, 'target': 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.307 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[af587191-a93d-4b95-89dc-4cec6d6c6593]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:3098'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 246295, 'tstamp': 246295}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 149095, 'error': None, 'target': 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.328 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[468268ca-058e-476d-8e31-79b25533df93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5952b311-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:30:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 246295, 'reachable_time': 41120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 149096, 'error': None, 'target': 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.373 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f15f74ef-5a7d-4aa7-a172-cca1c65fe1a5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.448 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[feec6c37-486b-4526-8e35-5bf1551d4f67]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.450 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5952b311-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.451 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.451 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5952b311-80, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 NetworkManager[1034]: <info>  [1759941174.4544] manager: (tap5952b311-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct 08 16:32:54 compute-0 kernel: tap5952b311-80: entered promiscuous mode
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.457 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5952b311-80, col_values=(('external_ids', {'iface-id': '728e9503-a577-402b-9f9a-3eb90576507b'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 ovn_controller[19768]: 2025-10-08T16:32:54Z|00175|binding|INFO|Releasing lport 728e9503-a577-402b-9f9a-3eb90576507b from this chassis (sb_readonly=0)
Oct 08 16:32:54 compute-0 nova_compute[117413]: 2025-10-08 16:32:54.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.475 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[de46d455-3d7d-4b31-9d60-ab0e29367114]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.476 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.476 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.477 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 5952b311-868c-4aad-8037-80ac85c56954 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.477 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.478 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc1a906-0396-43e0-87bd-8fd3a8815548]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.478 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.479 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c640e5bd-88d5-45c8-8d8d-7b86fc485e79]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.479 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-5952b311-868c-4aad-8037-80ac85c56954
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 5952b311-868c-4aad-8037-80ac85c56954
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:32:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:32:54.480 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'env', 'PROCESS_TAG=haproxy-5952b311-868c-4aad-8037-80ac85c56954', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5952b311-868c-4aad-8037-80ac85c56954.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:32:54 compute-0 podman[149135]: 2025-10-08 16:32:54.917430281 +0000 UTC m=+0.060695833 container create 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:32:54 compute-0 systemd[1]: Started libpod-conmon-8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05.scope.
Oct 08 16:32:54 compute-0 podman[149135]: 2025-10-08 16:32:54.884072853 +0000 UTC m=+0.027338395 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:32:54 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63093c5c18c69ccaa1b113641a3fc2c00328ab4f012557a6ad470c2cde2f16b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:32:55 compute-0 podman[149135]: 2025-10-08 16:32:55.006295551 +0000 UTC m=+0.149561093 container init 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 08 16:32:55 compute-0 podman[149135]: 2025-10-08 16:32:55.016978808 +0000 UTC m=+0.160244330 container start 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007)
Oct 08 16:32:55 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [NOTICE]   (149155) : New worker (149157) forked
Oct 08 16:32:55 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [NOTICE]   (149155) : Loading success.
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.218 2 DEBUG nova.compute.manager [req-604ea212-ae47-4f64-8d7d-e8de51a83169 req-4f4e6c8f-6733-432c-b93f-880a45490c59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.220 2 DEBUG oslo_concurrency.lockutils [req-604ea212-ae47-4f64-8d7d-e8de51a83169 req-4f4e6c8f-6733-432c-b93f-880a45490c59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.221 2 DEBUG oslo_concurrency.lockutils [req-604ea212-ae47-4f64-8d7d-e8de51a83169 req-4f4e6c8f-6733-432c-b93f-880a45490c59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.221 2 DEBUG oslo_concurrency.lockutils [req-604ea212-ae47-4f64-8d7d-e8de51a83169 req-4f4e6c8f-6733-432c-b93f-880a45490c59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.221 2 DEBUG nova.compute.manager [req-604ea212-ae47-4f64-8d7d-e8de51a83169 req-4f4e6c8f-6733-432c-b93f-880a45490c59 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Processing event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.222 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.228 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.232 2 INFO nova.virt.libvirt.driver [-] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Instance spawned successfully.
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.232 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.749 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.750 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.751 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.752 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.753 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:32:55 compute-0 nova_compute[117413]: 2025-10-08 16:32:55.754 2 DEBUG nova.virt.libvirt.driver [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:32:56 compute-0 nova_compute[117413]: 2025-10-08 16:32:56.265 2 INFO nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Took 8.29 seconds to spawn the instance on the hypervisor.
Oct 08 16:32:56 compute-0 nova_compute[117413]: 2025-10-08 16:32:56.266 2 DEBUG nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:32:56 compute-0 nova_compute[117413]: 2025-10-08 16:32:56.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:32:56 compute-0 nova_compute[117413]: 2025-10-08 16:32:56.798 2 INFO nova.compute.manager [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Took 13.52 seconds to build instance.
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.285 2 DEBUG nova.compute.manager [req-911247e5-8913-4652-867f-ea4d3b63a484 req-a0dd229a-264d-42cb-9b96-39895ad506b0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.286 2 DEBUG oslo_concurrency.lockutils [req-911247e5-8913-4652-867f-ea4d3b63a484 req-a0dd229a-264d-42cb-9b96-39895ad506b0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.286 2 DEBUG oslo_concurrency.lockutils [req-911247e5-8913-4652-867f-ea4d3b63a484 req-a0dd229a-264d-42cb-9b96-39895ad506b0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.287 2 DEBUG oslo_concurrency.lockutils [req-911247e5-8913-4652-867f-ea4d3b63a484 req-a0dd229a-264d-42cb-9b96-39895ad506b0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.287 2 DEBUG nova.compute.manager [req-911247e5-8913-4652-867f-ea4d3b63a484 req-a0dd229a-264d-42cb-9b96-39895ad506b0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.288 2 WARNING nova.compute.manager [req-911247e5-8913-4652-867f-ea4d3b63a484 req-a0dd229a-264d-42cb-9b96-39895ad506b0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received unexpected event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with vm_state active and task_state None.
Oct 08 16:32:57 compute-0 nova_compute[117413]: 2025-10-08 16:32:57.303 2 DEBUG oslo_concurrency.lockutils [None req-e6d5281f-e73c-4693-8a3f-6ac03a655f32 bdb98a9f1bf24f428912b8cdbbba458e ddb23534363a4dee8a87d68059ccced6 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.042s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:32:57 compute-0 podman[149166]: 2025-10-08 16:32:57.484584309 +0000 UTC m=+0.077890276 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public)
Oct 08 16:32:59 compute-0 podman[127881]: time="2025-10-08T16:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:32:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:32:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3488 "" "Go-http-client/1.1"
Oct 08 16:33:00 compute-0 nova_compute[117413]: 2025-10-08 16:33:00.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: ERROR   16:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: ERROR   16:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: ERROR   16:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: ERROR   16:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:33:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:33:01 compute-0 nova_compute[117413]: 2025-10-08 16:33:01.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:03 compute-0 podman[149187]: 2025-10-08 16:33:03.465274849 +0000 UTC m=+0.066577982 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:33:05 compute-0 nova_compute[117413]: 2025-10-08 16:33:05.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:06 compute-0 ovn_controller[19768]: 2025-10-08T16:33:06Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:1a:d2 10.100.0.9
Oct 08 16:33:06 compute-0 ovn_controller[19768]: 2025-10-08T16:33:06Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:1a:d2 10.100.0.9
Oct 08 16:33:06 compute-0 podman[149221]: 2025-10-08 16:33:06.465954921 +0000 UTC m=+0.058303873 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 08 16:33:06 compute-0 nova_compute[117413]: 2025-10-08 16:33:06.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:06 compute-0 nova_compute[117413]: 2025-10-08 16:33:06.868 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:07 compute-0 nova_compute[117413]: 2025-10-08 16:33:07.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:07 compute-0 nova_compute[117413]: 2025-10-08 16:33:07.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:33:10 compute-0 nova_compute[117413]: 2025-10-08 16:33:10.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:10 compute-0 nova_compute[117413]: 2025-10-08 16:33:10.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:10 compute-0 nova_compute[117413]: 2025-10-08 16:33:10.364 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:10 compute-0 nova_compute[117413]: 2025-10-08 16:33:10.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:11 compute-0 nova_compute[117413]: 2025-10-08 16:33:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:11 compute-0 nova_compute[117413]: 2025-10-08 16:33:11.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:11 compute-0 nova_compute[117413]: 2025-10-08 16:33:11.875 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:11 compute-0 nova_compute[117413]: 2025-10-08 16:33:11.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:11 compute-0 nova_compute[117413]: 2025-10-08 16:33:11.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:11 compute-0 nova_compute[117413]: 2025-10-08 16:33:11.876 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:33:12 compute-0 podman[149240]: 2025-10-08 16:33:12.013006046 +0000 UTC m=+0.101744791 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:33:12 compute-0 podman[149242]: 2025-10-08 16:33:12.039735103 +0000 UTC m=+0.114549968 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible)
Oct 08 16:33:12 compute-0 nova_compute[117413]: 2025-10-08 16:33:12.920 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:33:12 compute-0 nova_compute[117413]: 2025-10-08 16:33:12.972 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:33:12 compute-0 nova_compute[117413]: 2025-10-08 16:33:12.973 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.034 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.198 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.199 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.243 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.244 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5977MB free_disk=73.22189331054688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.245 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:13 compute-0 nova_compute[117413]: 2025-10-08 16:33:13.245 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:14 compute-0 nova_compute[117413]: 2025-10-08 16:33:14.312 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 214aa907-3edf-42c3-ac24-36294893f9df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:33:14 compute-0 nova_compute[117413]: 2025-10-08 16:33:14.313 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:33:14 compute-0 nova_compute[117413]: 2025-10-08 16:33:14.313 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:33:13 up 41 min,  0 user,  load average: 0.15, 0.18, 0.23\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_ddb23534363a4dee8a87d68059ccced6': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:33:14 compute-0 nova_compute[117413]: 2025-10-08 16:33:14.347 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:33:14 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 16:33:14 compute-0 nova_compute[117413]: 2025-10-08 16:33:14.857 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:33:15 compute-0 nova_compute[117413]: 2025-10-08 16:33:15.394 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:33:15 compute-0 nova_compute[117413]: 2025-10-08 16:33:15.394 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.149s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:15 compute-0 nova_compute[117413]: 2025-10-08 16:33:15.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:16 compute-0 nova_compute[117413]: 2025-10-08 16:33:16.394 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:16 compute-0 nova_compute[117413]: 2025-10-08 16:33:16.395 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:33:16 compute-0 nova_compute[117413]: 2025-10-08 16:33:16.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:19 compute-0 nova_compute[117413]: 2025-10-08 16:33:19.425 2 DEBUG nova.compute.manager [None req-bf4a83d3-886f-4088-9673-4d2a40125754 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:635
Oct 08 16:33:19 compute-0 nova_compute[117413]: 2025-10-08 16:33:19.478 2 DEBUG nova.compute.provider_tree [None req-bf4a83d3-886f-4088-9673-4d2a40125754 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 20 to 24 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:33:20 compute-0 nova_compute[117413]: 2025-10-08 16:33:20.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:21 compute-0 nova_compute[117413]: 2025-10-08 16:33:21.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:23 compute-0 podman[149300]: 2025-10-08 16:33:23.486884223 +0000 UTC m=+0.078341060 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:33:24 compute-0 ovn_controller[19768]: 2025-10-08T16:33:24Z|00176|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Oct 08 16:33:25 compute-0 nova_compute[117413]: 2025-10-08 16:33:25.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:26 compute-0 nova_compute[117413]: 2025-10-08 16:33:26.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:28 compute-0 podman[149321]: 2025-10-08 16:33:28.450014926 +0000 UTC m=+0.061158576 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, distribution-scope=public)
Oct 08 16:33:29 compute-0 podman[127881]: time="2025-10-08T16:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:33:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:33:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3494 "" "Go-http-client/1.1"
Oct 08 16:33:30 compute-0 nova_compute[117413]: 2025-10-08 16:33:30.019 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Check if temp file /var/lib/nova/instances/tmp40b1os98 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10968
Oct 08 16:33:30 compute-0 nova_compute[117413]: 2025-10-08 16:33:30.028 2 DEBUG nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp40b1os98',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='214aa907-3edf-42c3-ac24-36294893f9df',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.12/site-packages/nova/compute/manager.py:9294
Oct 08 16:33:30 compute-0 nova_compute[117413]: 2025-10-08 16:33:30.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: ERROR   16:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: ERROR   16:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: ERROR   16:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: ERROR   16:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:33:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:33:31 compute-0 nova_compute[117413]: 2025-10-08 16:33:31.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.391 2 DEBUG oslo_concurrency.processutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.451 2 DEBUG oslo_concurrency.processutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.452 2 DEBUG oslo_concurrency.processutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:33:34 compute-0 podman[149342]: 2025-10-08 16:33:34.495746263 +0000 UTC m=+0.088250804 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.551 2 DEBUG oslo_concurrency.processutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.552 2 DEBUG nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Preparing to wait for external event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.553 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.553 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:34 compute-0 nova_compute[117413]: 2025-10-08 16:33:34.553 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:35 compute-0 nova_compute[117413]: 2025-10-08 16:33:35.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:36 compute-0 nova_compute[117413]: 2025-10-08 16:33:36.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:37 compute-0 podman[149368]: 2025-10-08 16:33:37.488543068 +0000 UTC m=+0.079813012 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007)
Oct 08 16:33:39 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:39.755 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:33:39 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:39.756 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.770 2 DEBUG nova.compute.manager [req-d4023ec3-d424-41d0-94c8-41411bdcc0c7 req-f5b54735-3f64-4f56-80a0-2bc02403eebe c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.771 2 DEBUG oslo_concurrency.lockutils [req-d4023ec3-d424-41d0-94c8-41411bdcc0c7 req-f5b54735-3f64-4f56-80a0-2bc02403eebe c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.772 2 DEBUG oslo_concurrency.lockutils [req-d4023ec3-d424-41d0-94c8-41411bdcc0c7 req-f5b54735-3f64-4f56-80a0-2bc02403eebe c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.772 2 DEBUG oslo_concurrency.lockutils [req-d4023ec3-d424-41d0-94c8-41411bdcc0c7 req-f5b54735-3f64-4f56-80a0-2bc02403eebe c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.773 2 DEBUG nova.compute.manager [req-d4023ec3-d424-41d0-94c8-41411bdcc0c7 req-f5b54735-3f64-4f56-80a0-2bc02403eebe c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No event matching network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 in dict_keys([('network-vif-plugged', '986fc122-b6de-499e-83b7-12ae4247a345')]) pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:349
Oct 08 16:33:39 compute-0 nova_compute[117413]: 2025-10-08 16:33:39.773 2 DEBUG nova.compute.manager [req-d4023ec3-d424-41d0-94c8-41411bdcc0c7 req-f5b54735-3f64-4f56-80a0-2bc02403eebe c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:33:40 compute-0 nova_compute[117413]: 2025-10-08 16:33:40.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.213 2 INFO nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Took 6.66 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.819 2 DEBUG nova.compute.manager [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.819 2 DEBUG oslo_concurrency.lockutils [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG oslo_concurrency.lockutils [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG oslo_concurrency.lockutils [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG nova.compute.manager [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Processing event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG nova.compute.manager [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-changed-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG nova.compute.manager [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Refreshing instance network info cache due to event network-changed-986fc122-b6de-499e-83b7-12ae4247a345. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG oslo_concurrency.lockutils [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.820 2 DEBUG oslo_concurrency.lockutils [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.821 2 DEBUG nova.network.neutron [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Refreshing network info cache for port 986fc122-b6de-499e-83b7-12ae4247a345 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:33:41 compute-0 nova_compute[117413]: 2025-10-08 16:33:41.822 2 DEBUG nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:33:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:41.919 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:41.920 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:41.920 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:42 compute-0 nova_compute[117413]: 2025-10-08 16:33:42.328 2 WARNING neutronclient.v2_0.client [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:33:42 compute-0 nova_compute[117413]: 2025-10-08 16:33:42.337 2 DEBUG nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp40b1os98',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='214aa907-3edf-42c3-ac24-36294893f9df',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(b741462d-fd63-46f8-afe4-bbf36492e407),old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9659
Oct 08 16:33:42 compute-0 podman[149390]: 2025-10-08 16:33:42.4550705 +0000 UTC m=+0.060912069 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:33:42 compute-0 podman[149391]: 2025-10-08 16:33:42.518393767 +0000 UTC m=+0.119893512 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 08 16:33:42 compute-0 nova_compute[117413]: 2025-10-08 16:33:42.856 2 DEBUG nova.objects.instance [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 214aa907-3edf-42c3-ac24-36294893f9df obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:33:42 compute-0 nova_compute[117413]: 2025-10-08 16:33:42.857 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Starting monitoring of live migration _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11543
Oct 08 16:33:42 compute-0 nova_compute[117413]: 2025-10-08 16:33:42.858 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 08 16:33:42 compute-0 nova_compute[117413]: 2025-10-08 16:33:42.859 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.108 2 WARNING neutronclient.v2_0.client [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.285 2 DEBUG nova.network.neutron [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Updated VIF entry in instance network info cache for port 986fc122-b6de-499e-83b7-12ae4247a345. _build_network_info_model /usr/lib/python3.12/site-packages/nova/network/neutron.py:3542
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.285 2 DEBUG nova.network.neutron [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Updating instance_info_cache with network_info: [{"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.361 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Operation thread is still running _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11343
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.361 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migration not running yet _live_migration_monitor /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11352
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.367 2 DEBUG nova.virt.libvirt.vif [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:32:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1386125514',id=21,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:32:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddb23534363a4dee8a87d68059ccced6',ramdisk_id='',reservation_id='r-kfwyy0pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:32:56Z,user_data=None,user_id='bdb98a9f1bf24f428912b8cdbbba458e',uuid=214aa907-3edf-42c3-ac24-36294893f9df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.367 2 DEBUG nova.network.os_vif_util [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.368 2 DEBUG nova.network.os_vif_util [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.368 2 DEBUG nova.virt.libvirt.migration [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Updating guest XML with vif config: <interface type="ethernet">
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <mac address="fa:16:3e:71:1a:d2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <model type="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <mtu size="1442"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <target dev="tap986fc122-b6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]: </interface>
Oct 08 16:33:43 compute-0 nova_compute[117413]:  _update_vif_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:534
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.369 2 DEBUG nova.virt.libvirt.migration [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] _remove_cpu_shared_set_xml input xml=<domain type="kvm">
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <name>instance-00000015</name>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <uuid>214aa907-3edf-42c3-ac24-36294893f9df</uuid>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514</nova:name>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:32:49</nova:creationTime>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:user uuid="bdb98a9f1bf24f428912b8cdbbba458e">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin</nova:user>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:project uuid="ddb23534363a4dee8a87d68059ccced6">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338</nova:project>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:port uuid="986fc122-b6de-499e-83b7-12ae4247a345">
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <memory unit="KiB">131072</memory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <vcpu placement="static">1</vcpu>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <resource>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <partition>/machine</partition>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </resource>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <system>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="serial">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="uuid">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </system>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <os>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </os>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <features>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <vmcoreinfo state="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </features>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <cpu mode="host-model" check="partial">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_poweroff>destroy</on_poweroff>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_reboot>restart</on_reboot>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_crash>destroy</on_crash>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <readonly/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="1" port="0x10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="2" port="0x11"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="3" port="0x12"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="4" port="0x13"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="5" port="0x14"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="6" port="0x15"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="7" port="0x16"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="8" port="0x17"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="9" port="0x18"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="10" port="0x19"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="11" port="0x1a"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="12" port="0x1b"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="13" port="0x1c"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="14" port="0x1d"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="15" port="0x1e"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="16" port="0x1f"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="17" port="0x20"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="18" port="0x21"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="19" port="0x22"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="20" port="0x23"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="21" port="0x24"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="22" port="0x25"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="23" port="0x26"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="24" port="0x27"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="25" port="0x28"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-pci-bridge"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="sata" index="0">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:71:1a:d2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="tap986fc122-b6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target type="isa-serial" port="0">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <model name="isa-serial"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </target>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <console type="pty">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target type="serial" port="0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </console>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="usb" bus="0" port="1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </input>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <input type="mouse" bus="ps2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <listen type="address" address="::"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <video>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model type="virtio" heads="1" primary="yes"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </video>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]: </domain>
Oct 08 16:33:43 compute-0 nova_compute[117413]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:241
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.371 2 DEBUG nova.virt.libvirt.migration [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] _remove_cpu_shared_set_xml output xml=<domain type="kvm">
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <name>instance-00000015</name>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <uuid>214aa907-3edf-42c3-ac24-36294893f9df</uuid>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514</nova:name>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:32:49</nova:creationTime>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:user uuid="bdb98a9f1bf24f428912b8cdbbba458e">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin</nova:user>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:project uuid="ddb23534363a4dee8a87d68059ccced6">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338</nova:project>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:port uuid="986fc122-b6de-499e-83b7-12ae4247a345">
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <memory unit="KiB">131072</memory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <vcpu placement="static">1</vcpu>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <resource>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <partition>/machine</partition>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </resource>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <system>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="serial">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="uuid">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </system>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <os>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </os>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <features>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <vmcoreinfo state="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </features>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <cpu mode="host-model" check="partial">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_poweroff>destroy</on_poweroff>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_reboot>restart</on_reboot>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_crash>destroy</on_crash>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <readonly/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="1" port="0x10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="2" port="0x11"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="3" port="0x12"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="4" port="0x13"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="5" port="0x14"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="6" port="0x15"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="7" port="0x16"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="8" port="0x17"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="9" port="0x18"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="10" port="0x19"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="11" port="0x1a"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="12" port="0x1b"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="13" port="0x1c"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="14" port="0x1d"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="15" port="0x1e"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="16" port="0x1f"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="17" port="0x20"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="18" port="0x21"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="19" port="0x22"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="20" port="0x23"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="21" port="0x24"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="22" port="0x25"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="23" port="0x26"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="24" port="0x27"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="25" port="0x28"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-pci-bridge"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="sata" index="0">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:71:1a:d2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="tap986fc122-b6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target type="isa-serial" port="0">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <model name="isa-serial"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </target>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <console type="pty">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target type="serial" port="0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </console>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="usb" bus="0" port="1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </input>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <input type="mouse" bus="ps2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <listen type="address" address="::"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <video>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model type="virtio" heads="1" primary="yes"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </video>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]: </domain>
Oct 08 16:33:43 compute-0 nova_compute[117413]:  _remove_cpu_shared_set_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:250
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.372 2 DEBUG nova.virt.libvirt.migration [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] _update_pci_xml output xml=<domain type="kvm">
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <name>instance-00000015</name>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <uuid>214aa907-3edf-42c3-ac24-36294893f9df</uuid>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514</nova:name>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:32:49</nova:creationTime>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:user uuid="bdb98a9f1bf24f428912b8cdbbba458e">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin</nova:user>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:project uuid="ddb23534363a4dee8a87d68059ccced6">tempest-TestExecuteVmWorkloadBalanceStrategy-630971338</nova:project>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <nova:port uuid="986fc122-b6de-499e-83b7-12ae4247a345">
Oct 08 16:33:43 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <memory unit="KiB">131072</memory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <currentMemory unit="KiB">131072</currentMemory>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <vcpu placement="static">1</vcpu>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <resource>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <partition>/machine</partition>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </resource>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <system>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="serial">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="uuid">214aa907-3edf-42c3-ac24-36294893f9df</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </system>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <os>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="pc-q35-rhel9.6.0">hvm</type>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </os>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <features>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <vmcoreinfo state="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </features>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <cpu mode="host-model" check="partial">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <topology sockets="1" dies="1" clusters="1" cores="1" threads="1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_poweroff>destroy</on_poweroff>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_reboot>restart</on_reboot>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <on_crash>destroy</on_crash>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/disk.config"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <readonly/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="drive" controller="0" bus="0" target="0" unit="0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="0" model="pcie-root"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="1" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="1" port="0x10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="2" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="2" port="0x11"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="3" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="3" port="0x12"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="4" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="4" port="0x13"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="5" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="5" port="0x14"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="6" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="6" port="0x15"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="7" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="7" port="0x16"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="8" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="8" port="0x17"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x02" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="9" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="9" port="0x18"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="10" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="10" port="0x19"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="11" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="11" port="0x1a"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="12" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="12" port="0x1b"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="13" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="13" port="0x1c"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="14" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="14" port="0x1d"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="15" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="15" port="0x1e"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="16" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="16" port="0x1f"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="17" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="17" port="0x20"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0" multifunction="on"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="18" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="18" port="0x21"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="19" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="19" port="0x22"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="20" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="20" port="0x23"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x3"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="21" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="21" port="0x24"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x4"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="22" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="22" port="0x25"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x5"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="23" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="23" port="0x26"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="24" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="24" port="0x27"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x7"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="25" model="pcie-root-port">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-root-port"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target chassis="25" port="0x28"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="pci" index="26" model="pcie-to-pci-bridge">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model name="pcie-pci-bridge"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="usb" index="0" model="piix3-uhci">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x1a" slot="0x01" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <controller type="sata" index="0">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x1f" function="0x2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </controller>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:71:1a:d2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target dev="tap986fc122-b6"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target type="isa-serial" port="0">
Oct 08 16:33:43 compute-0 nova_compute[117413]:         <model name="isa-serial"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       </target>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <console type="pty">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df/console.log" append="off"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <target type="serial" port="0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </console>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="usb" bus="0" port="1"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </input>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <input type="mouse" bus="ps2"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <graphics type="vnc" port="-1" autoport="yes" listen="::">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <listen type="address" address="::"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </graphics>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <video>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <model type="virtio" heads="1" primary="yes"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x00" slot="0x01" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </video>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:33:43 compute-0 nova_compute[117413]:       <address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:33:43 compute-0 nova_compute[117413]:   <seclabel type="dynamic" model="selinux" relabel="yes"/>
Oct 08 16:33:43 compute-0 nova_compute[117413]: </domain>
Oct 08 16:33:43 compute-0 nova_compute[117413]:  _update_pci_dev_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:166
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.372 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] About to invoke the migrate API _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11175
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.791 2 DEBUG oslo_concurrency.lockutils [req-9eb8c3ee-3b82-44de-8494-9d5d88372afc req-fade9c57-4c9b-4a5b-b811-76531f32b083 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-214aa907-3edf-42c3-ac24-36294893f9df" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.864 2 DEBUG nova.virt.libvirt.migration [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Current None elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.12/site-packages/nova/virt/libvirt/migration.py:658
Oct 08 16:33:43 compute-0 nova_compute[117413]: 2025-10-08 16:33:43.864 2 INFO nova.virt.libvirt.migration [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 08 16:33:44 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:44.757 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:33:44 compute-0 nova_compute[117413]: 2025-10-08 16:33:44.885 2 INFO nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migration running for 1 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 08 16:33:45 compute-0 kernel: tap986fc122-b6 (unregistering): left promiscuous mode
Oct 08 16:33:45 compute-0 NetworkManager[1034]: <info>  [1759941225.3522] device (tap986fc122-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:33:45 compute-0 ovn_controller[19768]: 2025-10-08T16:33:45Z|00177|binding|INFO|Releasing lport 986fc122-b6de-499e-83b7-12ae4247a345 from this chassis (sb_readonly=0)
Oct 08 16:33:45 compute-0 ovn_controller[19768]: 2025-10-08T16:33:45Z|00178|binding|INFO|Setting lport 986fc122-b6de-499e-83b7-12ae4247a345 down in Southbound
Oct 08 16:33:45 compute-0 ovn_controller[19768]: 2025-10-08T16:33:45Z|00179|binding|INFO|Removing iface tap986fc122-b6 ovn-installed in OVS
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.369 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:1a:d2 10.100.0.9'], port_security=['fa:16:3e:71:1a:d2 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '71d4f943-a908-4940-aa7e-dbc90ffb7e42'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '214aa907-3edf-42c3-ac24-36294893f9df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5952b311-868c-4aad-8037-80ac85c56954', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddb23534363a4dee8a87d68059ccced6', 'neutron:revision_number': '10', 'neutron:security_group_ids': '59d0e94f-bb7e-4b50-a2ba-011426feae4d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b730dc6-0991-46d6-9b8c-1ba4ba734d3f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=986fc122-b6de-499e-83b7-12ae4247a345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.370 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 986fc122-b6de-499e-83b7-12ae4247a345 in datapath 5952b311-868c-4aad-8037-80ac85c56954 unbound from our chassis
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.371 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5952b311-868c-4aad-8037-80ac85c56954, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.373 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9346d667-7115-4a19-abb6-10640889bbc1]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.374 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5952b311-868c-4aad-8037-80ac85c56954 namespace which is not needed anymore
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:45 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 08 16:33:45 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000015.scope: Consumed 14.763s CPU time.
Oct 08 16:33:45 compute-0 systemd-machined[77548]: Machine qemu-16-instance-00000015 terminated.
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.514 2 DEBUG nova.compute.manager [req-c5d8af68-b82a-481f-ab90-07c77b17603e req-d40b854c-bd66-4143-ae68-7cde9e48defc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.516 2 DEBUG oslo_concurrency.lockutils [req-c5d8af68-b82a-481f-ab90-07c77b17603e req-d40b854c-bd66-4143-ae68-7cde9e48defc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.516 2 DEBUG oslo_concurrency.lockutils [req-c5d8af68-b82a-481f-ab90-07c77b17603e req-d40b854c-bd66-4143-ae68-7cde9e48defc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.517 2 DEBUG oslo_concurrency.lockutils [req-c5d8af68-b82a-481f-ab90-07c77b17603e req-d40b854c-bd66-4143-ae68-7cde9e48defc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.517 2 DEBUG nova.compute.manager [req-c5d8af68-b82a-481f-ab90-07c77b17603e req-d40b854c-bd66-4143-ae68-7cde9e48defc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.517 2 DEBUG nova.compute.manager [req-c5d8af68-b82a-481f-ab90-07c77b17603e req-d40b854c-bd66-4143-ae68-7cde9e48defc c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:33:45 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [NOTICE]   (149155) : haproxy version is 3.0.5-8e879a5
Oct 08 16:33:45 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [NOTICE]   (149155) : path to executable is /usr/sbin/haproxy
Oct 08 16:33:45 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [WARNING]  (149155) : Exiting Master process...
Oct 08 16:33:45 compute-0 podman[149482]: 2025-10-08 16:33:45.533295468 +0000 UTC m=+0.042545402 container kill 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:33:45 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [ALERT]    (149155) : Current worker (149157) exited with code 143 (Terminated)
Oct 08 16:33:45 compute-0 neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954[149151]: [WARNING]  (149155) : All workers exited. Exiting... (0)
Oct 08 16:33:45 compute-0 systemd[1]: libpod-8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05.scope: Deactivated successfully.
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:45 compute-0 podman[149499]: 2025-10-08 16:33:45.577175108 +0000 UTC m=+0.024895986 container died 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007)
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.596 2 DEBUG nova.virt.libvirt.guest [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.12/site-packages/nova/virt/libvirt/guest.py:687
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.598 2 INFO nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migration operation has completed
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.598 2 INFO nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] _post_live_migration() is started..
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.600 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migrate API has completed _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11182
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.600 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migration operation thread has finished _live_migration_operation /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11230
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.601 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migration operation thread notification thread_finished /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11533
Oct 08 16:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05-userdata-shm.mount: Deactivated successfully.
Oct 08 16:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-63093c5c18c69ccaa1b113641a3fc2c00328ab4f012557a6ad470c2cde2f16b8-merged.mount: Deactivated successfully.
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.615 2 WARNING neutronclient.v2_0.client [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.616 2 WARNING neutronclient.v2_0.client [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:33:45 compute-0 podman[149499]: 2025-10-08 16:33:45.640637989 +0000 UTC m=+0.088358877 container cleanup 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest)
Oct 08 16:33:45 compute-0 systemd[1]: libpod-conmon-8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05.scope: Deactivated successfully.
Oct 08 16:33:45 compute-0 podman[149503]: 2025-10-08 16:33:45.664622447 +0000 UTC m=+0.103150781 container remove 8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007)
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.672 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[67becb6a-3fae-4438-87b1-d40c71accdb9]: (4, ("Wed Oct  8 04:33:45 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954 (8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05)\n8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05\nWed Oct  8 04:33:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5952b311-868c-4aad-8037-80ac85c56954 (8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05)\n8f20e5d2e634224ebf429d7e1be1a76b408600b5333d8f6dcfac86dc9c733b05\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.674 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[35e2c859-2c61-46ff-9229-b6c6895b1a33]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.675 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5952b311-868c-4aad-8037-80ac85c56954.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.676 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[86bde00f-1bd2-42a3-8aa1-10922a654e86]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.676 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5952b311-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:45 compute-0 kernel: tap5952b311-80: left promiscuous mode
Oct 08 16:33:45 compute-0 nova_compute[117413]: 2025-10-08 16:33:45.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.696 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9d559154-a075-4109-908e-7cfcc86bc966]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.727 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0644735e-3862-4836-a881-fb8212f15dba]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.729 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[45af1872-d0a8-4869-9679-bd7e9064a269]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.751 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a348ab8e-bbd5-4611-8f1f-2673906c0758]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 246286, 'reachable_time': 43407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 149549, 'error': None, 'target': 'ovnmeta-5952b311-868c-4aad-8037-80ac85c56954', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.753 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5952b311-868c-4aad-8037-80ac85c56954 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:33:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:33:45.753 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7eca07-f233-471d-8677-9cd6fdff98bc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:33:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d5952b311\x2d868c\x2d4aad\x2d8037\x2d80ac85c56954.mount: Deactivated successfully.
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.526 2 DEBUG nova.network.neutron [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Activated binding for port 986fc122-b6de-499e-83b7-12ae4247a345 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.12/site-packages/nova/network/neutron.py:3241
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.527 2 DEBUG nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10059
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.527 2 DEBUG nova.virt.libvirt.vif [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:32:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-1386125514',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-1386125514',id=21,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:32:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ddb23534363a4dee8a87d68059ccced6',ramdisk_id='',reservation_id='r-kfwyy0pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-630971338-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:33:23Z,user_data=None,user_id='bdb98a9f1bf24f428912b8cdbbba458e',uuid=214aa907-3edf-42c3-ac24-36294893f9df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.528 2 DEBUG nova.network.os_vif_util [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "986fc122-b6de-499e-83b7-12ae4247a345", "address": "fa:16:3e:71:1a:d2", "network": {"id": "5952b311-868c-4aad-8037-80ac85c56954", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-839506186-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9cc61fc52354bf197dc66c86673dfe2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap986fc122-b6", "ovs_interfaceid": "986fc122-b6de-499e-83b7-12ae4247a345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.528 2 DEBUG nova.network.os_vif_util [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.529 2 DEBUG os_vif [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap986fc122-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=2b5fe258-fd35-4fe8-8fbf-2d8c2df6e3c8) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.537 2 INFO os_vif [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:1a:d2,bridge_name='br-int',has_traffic_filtering=True,id=986fc122-b6de-499e-83b7-12ae4247a345,network=Network(5952b311-868c-4aad-8037-80ac85c56954),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap986fc122-b6')
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.537 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.537 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.538 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.538 2 DEBUG nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:10082
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.538 2 INFO nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Deleting instance files /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df_del
Oct 08 16:33:46 compute-0 nova_compute[117413]: 2025-10-08 16:33:46.539 2 INFO nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Deletion of /var/lib/nova/instances/214aa907-3edf-42c3-ac24-36294893f9df_del complete
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.592 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.592 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.593 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.593 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.593 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.593 2 WARNING nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received unexpected event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with vm_state active and task_state migrating.
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.594 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.594 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.594 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.594 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.595 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.595 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.595 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.596 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.596 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.596 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.596 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.597 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-unplugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with task_state migrating. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.597 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.597 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.597 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.597 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.598 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.598 2 WARNING nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received unexpected event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with vm_state active and task_state migrating.
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.598 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.598 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.598 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.598 2 DEBUG oslo_concurrency.lockutils [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.599 2 DEBUG nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] No waiting events found dispatching network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:33:47 compute-0 nova_compute[117413]: 2025-10-08 16:33:47.599 2 WARNING nova.compute.manager [req-92de70cd-ffa5-47fc-8d06-0faa8fd032f4 req-bacd50e7-242d-4851-a0f0-55ced60f9779 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Received unexpected event network-vif-plugged-986fc122-b6de-499e-83b7-12ae4247a345 for instance with vm_state active and task_state migrating.
Oct 08 16:33:51 compute-0 nova_compute[117413]: 2025-10-08 16:33:51.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:51 compute-0 nova_compute[117413]: 2025-10-08 16:33:51.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:54 compute-0 systemd[1]: Starting dnf makecache...
Oct 08 16:33:54 compute-0 podman[149551]: 2025-10-08 16:33:54.469087032 +0000 UTC m=+0.069240299 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=multipathd)
Oct 08 16:33:54 compute-0 dnf[149552]: Metadata cache refreshed recently.
Oct 08 16:33:54 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 08 16:33:54 compute-0 systemd[1]: Finished dnf makecache.
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.073 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "214aa907-3edf-42c3-ac24-36294893f9df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.073 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.073 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "214aa907-3edf-42c3-ac24-36294893f9df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.588 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.589 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.589 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.589 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.758 2 WARNING nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.760 2 DEBUG oslo_concurrency.processutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.786 2 DEBUG oslo_concurrency.processutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.787 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6158MB free_disk=73.25055694580078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.787 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:33:56 compute-0 nova_compute[117413]: 2025-10-08 16:33:56.787 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:33:57 compute-0 nova_compute[117413]: 2025-10-08 16:33:57.807 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Migration for instance 214aa907-3edf-42c3-ac24-36294893f9df refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:33:58 compute-0 nova_compute[117413]: 2025-10-08 16:33:58.318 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 08 16:33:58 compute-0 nova_compute[117413]: 2025-10-08 16:33:58.344 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Migration b741462d-fd63-46f8-afe4-bbf36492e407 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1745
Oct 08 16:33:58 compute-0 nova_compute[117413]: 2025-10-08 16:33:58.345 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:33:58 compute-0 nova_compute[117413]: 2025-10-08 16:33:58.345 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:33:56 up 42 min,  0 user,  load average: 0.08, 0.16, 0.22\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:33:58 compute-0 nova_compute[117413]: 2025-10-08 16:33:58.390 2 DEBUG nova.compute.provider_tree [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:33:58 compute-0 nova_compute[117413]: 2025-10-08 16:33:58.899 2 DEBUG nova.scheduler.client.report [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:33:59 compute-0 nova_compute[117413]: 2025-10-08 16:33:59.411 2 DEBUG nova.compute.resource_tracker [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:33:59 compute-0 nova_compute[117413]: 2025-10-08 16:33:59.411 2 DEBUG oslo_concurrency.lockutils [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.624s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:33:59 compute-0 nova_compute[117413]: 2025-10-08 16:33:59.429 2 INFO nova.compute.manager [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Oct 08 16:33:59 compute-0 podman[149575]: 2025-10-08 16:33:59.498301684 +0000 UTC m=+0.094932655 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=)
Oct 08 16:33:59 compute-0 podman[127881]: time="2025-10-08T16:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:33:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:33:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3027 "" "Go-http-client/1.1"
Oct 08 16:34:00 compute-0 nova_compute[117413]: 2025-10-08 16:34:00.497 2 INFO nova.scheduler.client.report [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Deleted allocation for migration b741462d-fd63-46f8-afe4-bbf36492e407
Oct 08 16:34:00 compute-0 nova_compute[117413]: 2025-10-08 16:34:00.498 2 DEBUG nova.virt.libvirt.driver [None req-798f21dc-dc72-4281-94b7-f986c65ab48d ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 214aa907-3edf-42c3-ac24-36294893f9df] Live migration monitoring is all done _live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11566
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: ERROR   16:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: ERROR   16:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: ERROR   16:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: ERROR   16:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:34:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:34:01 compute-0 nova_compute[117413]: 2025-10-08 16:34:01.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:01 compute-0 nova_compute[117413]: 2025-10-08 16:34:01.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:05 compute-0 podman[149597]: 2025-10-08 16:34:05.4885897 +0000 UTC m=+0.081795719 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid)
Oct 08 16:34:06 compute-0 nova_compute[117413]: 2025-10-08 16:34:06.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:06 compute-0 nova_compute[117413]: 2025-10-08 16:34:06.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:08 compute-0 nova_compute[117413]: 2025-10-08 16:34:08.359 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:08 compute-0 podman[149617]: 2025-10-08 16:34:08.472019035 +0000 UTC m=+0.071627086 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 08 16:34:09 compute-0 nova_compute[117413]: 2025-10-08 16:34:09.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:09 compute-0 nova_compute[117413]: 2025-10-08 16:34:09.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:34:10 compute-0 nova_compute[117413]: 2025-10-08 16:34:10.603 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.883 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:34:11 compute-0 nova_compute[117413]: 2025-10-08 16:34:11.885 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:34:12 compute-0 nova_compute[117413]: 2025-10-08 16:34:12.050 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:34:12 compute-0 nova_compute[117413]: 2025-10-08 16:34:12.052 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:34:12 compute-0 nova_compute[117413]: 2025-10-08 16:34:12.082 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:34:12 compute-0 nova_compute[117413]: 2025-10-08 16:34:12.084 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6162MB free_disk=73.25055694580078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:34:12 compute-0 nova_compute[117413]: 2025-10-08 16:34:12.084 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:34:12 compute-0 nova_compute[117413]: 2025-10-08 16:34:12.085 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.141 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.142 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:34:12 up 42 min,  0 user,  load average: 0.06, 0.15, 0.21\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.175 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.432 2 DEBUG nova.compute.manager [None req-3c598556-bde3-43f1-8548-4760ba502087 b67951c3098d4bf994e39f4ac55e142e b2eb43725f1e4dbfa51aeb475eac607e - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 in placement. update_compute_provider_status /usr/lib/python3.12/site-packages/nova/compute/manager.py:631
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.482 2 DEBUG nova.compute.provider_tree [None req-3c598556-bde3-43f1-8548-4760ba502087 b67951c3098d4bf994e39f4ac55e142e b2eb43725f1e4dbfa51aeb475eac607e - - default default] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 24 to 27 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:34:13 compute-0 podman[149639]: 2025-10-08 16:34:13.491022225 +0000 UTC m=+0.092548357 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:34:13 compute-0 podman[149640]: 2025-10-08 16:34:13.518955037 +0000 UTC m=+0.122433455 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.689 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:34:13 compute-0 nova_compute[117413]: 2025-10-08 16:34:13.744 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 27 to 28 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:34:14 compute-0 nova_compute[117413]: 2025-10-08 16:34:14.253 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:34:14 compute-0 nova_compute[117413]: 2025-10-08 16:34:14.254 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.169s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:34:15 compute-0 nova_compute[117413]: 2025-10-08 16:34:15.251 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:15 compute-0 nova_compute[117413]: 2025-10-08 16:34:15.252 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:15 compute-0 nova_compute[117413]: 2025-10-08 16:34:15.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:15 compute-0 nova_compute[117413]: 2025-10-08 16:34:15.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:16 compute-0 nova_compute[117413]: 2025-10-08 16:34:16.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:16 compute-0 nova_compute[117413]: 2025-10-08 16:34:16.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:21 compute-0 nova_compute[117413]: 2025-10-08 16:34:21.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:21 compute-0 nova_compute[117413]: 2025-10-08 16:34:21.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:24 compute-0 nova_compute[117413]: 2025-10-08 16:34:24.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:25 compute-0 podman[149687]: 2025-10-08 16:34:25.456166172 +0000 UTC m=+0.058734647 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:34:26 compute-0 nova_compute[117413]: 2025-10-08 16:34:26.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:26 compute-0 nova_compute[117413]: 2025-10-08 16:34:26.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:34:26 compute-0 nova_compute[117413]: 2025-10-08 16:34:26.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:26 compute-0 nova_compute[117413]: 2025-10-08 16:34:26.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:29 compute-0 podman[127881]: time="2025-10-08T16:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:34:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:34:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3027 "" "Go-http-client/1.1"
Oct 08 16:34:29 compute-0 nova_compute[117413]: 2025-10-08 16:34:29.867 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:29 compute-0 nova_compute[117413]: 2025-10-08 16:34:29.868 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:34:30 compute-0 nova_compute[117413]: 2025-10-08 16:34:30.374 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:34:30 compute-0 nova_compute[117413]: 2025-10-08 16:34:30.374 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:30 compute-0 podman[149707]: 2025-10-08 16:34:30.495726753 +0000 UTC m=+0.079284807 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, 
url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, version=9.6, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Oct 08 16:34:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:30.521 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:54:5f 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fa179725d8340b1be8823a581335be9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=736ca924-b754-456e-b784-63a1a480c78a) old=Port_Binding(mac=['fa:16:3e:25:54:5f'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8fa179725d8340b1be8823a581335be9', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:34:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:30.523 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 736ca924-b754-456e-b784-63a1a480c78a in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa updated
Oct 08 16:34:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:30.525 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:34:30 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:30.525 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e17d1700-f748-4d06-9535-587cdf2790ce]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: ERROR   16:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: ERROR   16:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: ERROR   16:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: ERROR   16:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:34:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:34:31 compute-0 nova_compute[117413]: 2025-10-08 16:34:31.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:31 compute-0 nova_compute[117413]: 2025-10-08 16:34:31.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:36 compute-0 podman[149729]: 2025-10-08 16:34:36.456027478 +0000 UTC m=+0.058216412 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 08 16:34:36 compute-0 nova_compute[117413]: 2025-10-08 16:34:36.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:36 compute-0 nova_compute[117413]: 2025-10-08 16:34:36.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:37.019 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:57:f6 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-bbd53546-a966-43c9-9797-29c4716d2b48', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbd53546-a966-43c9-9797-29c4716d2b48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=52690745-345c-4412-b61d-e0c5fa506a78, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e9c151a6-4d28-46c5-bab9-beebd1a123e2) old=Port_Binding(mac=['fa:16:3e:2c:57:f6'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-bbd53546-a966-43c9-9797-29c4716d2b48', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbd53546-a966-43c9-9797-29c4716d2b48', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:34:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:37.020 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e9c151a6-4d28-46c5-bab9-beebd1a123e2 in datapath bbd53546-a966-43c9-9797-29c4716d2b48 updated
Oct 08 16:34:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:37.021 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbd53546-a966-43c9-9797-29c4716d2b48, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:34:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:37.022 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[94e08e86-d48b-4858-94be-c327136438bb]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:34:39 compute-0 podman[149749]: 2025-10-08 16:34:39.440855165 +0000 UTC m=+0.049686627 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 08 16:34:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:40.503 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:34:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:40.504 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:34:40 compute-0 nova_compute[117413]: 2025-10-08 16:34:40.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:41 compute-0 nova_compute[117413]: 2025-10-08 16:34:41.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:41 compute-0 nova_compute[117413]: 2025-10-08 16:34:41.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:41.921 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:34:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:41.921 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:34:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:41.921 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:34:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:34:42.506 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:34:44 compute-0 podman[149771]: 2025-10-08 16:34:44.446317994 +0000 UTC m=+0.058542392 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:34:44 compute-0 podman[149772]: 2025-10-08 16:34:44.513118411 +0000 UTC m=+0.120312684 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:34:46 compute-0 nova_compute[117413]: 2025-10-08 16:34:46.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:46 compute-0 nova_compute[117413]: 2025-10-08 16:34:46.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:51 compute-0 nova_compute[117413]: 2025-10-08 16:34:51.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:51 compute-0 nova_compute[117413]: 2025-10-08 16:34:51.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:52 compute-0 ovn_controller[19768]: 2025-10-08T16:34:52Z|00180|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 08 16:34:56 compute-0 podman[149822]: 2025-10-08 16:34:56.498060358 +0000 UTC m=+0.097891132 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:34:56 compute-0 nova_compute[117413]: 2025-10-08 16:34:56.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:56 compute-0 nova_compute[117413]: 2025-10-08 16:34:56.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:34:59 compute-0 nova_compute[117413]: 2025-10-08 16:34:59.108 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:34:59 compute-0 podman[127881]: time="2025-10-08T16:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:34:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:34:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3022 "" "Go-http-client/1.1"
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: ERROR   16:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: ERROR   16:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: ERROR   16:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: ERROR   16:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:35:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:35:01 compute-0 podman[149842]: 2025-10-08 16:35:01.480504727 +0000 UTC m=+0.078366910 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:35:01 compute-0 nova_compute[117413]: 2025-10-08 16:35:01.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:01 compute-0 nova_compute[117413]: 2025-10-08 16:35:01.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:06 compute-0 nova_compute[117413]: 2025-10-08 16:35:06.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:06 compute-0 nova_compute[117413]: 2025-10-08 16:35:06.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:07 compute-0 podman[149865]: 2025-10-08 16:35:07.472481203 +0000 UTC m=+0.083698464 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:35:08 compute-0 nova_compute[117413]: 2025-10-08 16:35:08.867 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:10 compute-0 nova_compute[117413]: 2025-10-08 16:35:10.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:10 compute-0 nova_compute[117413]: 2025-10-08 16:35:10.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:35:10 compute-0 podman[149886]: 2025-10-08 16:35:10.445822402 +0000 UTC m=+0.051275418 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.873 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.874 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.874 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.874 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.992 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:35:11 compute-0 nova_compute[117413]: 2025-10-08 16:35:11.993 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:12 compute-0 nova_compute[117413]: 2025-10-08 16:35:12.011 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:12 compute-0 nova_compute[117413]: 2025-10-08 16:35:12.012 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6193MB free_disk=73.25055694580078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:35:12 compute-0 nova_compute[117413]: 2025-10-08 16:35:12.012 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:35:12 compute-0 nova_compute[117413]: 2025-10-08 16:35:12.012 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.090 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.091 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:35:12 up 43 min,  0 user,  load average: 0.02, 0.12, 0.19\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.137 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.194 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.195 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.206 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.222 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STATUS_DISABLED,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS
_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.245 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.753 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:35:13 compute-0 nova_compute[117413]: 2025-10-08 16:35:13.811 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 generation from 28 to 29 during operation: update_traits _update_generation /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:164
Oct 08 16:35:14 compute-0 nova_compute[117413]: 2025-10-08 16:35:14.321 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:35:14 compute-0 nova_compute[117413]: 2025-10-08 16:35:14.321 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.309s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:35:15 compute-0 podman[149906]: 2025-10-08 16:35:15.47321875 +0000 UTC m=+0.071625418 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:35:15 compute-0 podman[149907]: 2025-10-08 16:35:15.515452005 +0000 UTC m=+0.107535230 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Oct 08 16:35:16 compute-0 nova_compute[117413]: 2025-10-08 16:35:16.321 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:16 compute-0 nova_compute[117413]: 2025-10-08 16:35:16.322 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:16 compute-0 nova_compute[117413]: 2025-10-08 16:35:16.322 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:16 compute-0 nova_compute[117413]: 2025-10-08 16:35:16.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:16 compute-0 nova_compute[117413]: 2025-10-08 16:35:16.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:17 compute-0 nova_compute[117413]: 2025-10-08 16:35:17.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:35:21 compute-0 nova_compute[117413]: 2025-10-08 16:35:21.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:21 compute-0 nova_compute[117413]: 2025-10-08 16:35:21.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:26 compute-0 nova_compute[117413]: 2025-10-08 16:35:26.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:26 compute-0 nova_compute[117413]: 2025-10-08 16:35:26.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:26 compute-0 podman[149956]: 2025-10-08 16:35:26.674798827 +0000 UTC m=+0.103241016 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 08 16:35:29 compute-0 podman[127881]: time="2025-10-08T16:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:35:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:35:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3030 "" "Go-http-client/1.1"
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: ERROR   16:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: ERROR   16:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: ERROR   16:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: ERROR   16:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:35:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:35:31 compute-0 nova_compute[117413]: 2025-10-08 16:35:31.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:31 compute-0 nova_compute[117413]: 2025-10-08 16:35:31.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:32 compute-0 podman[149976]: 2025-10-08 16:35:32.450138778 +0000 UTC m=+0.060500025 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7)
Oct 08 16:35:34 compute-0 nova_compute[117413]: 2025-10-08 16:35:34.587 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Creating tmpfile /var/lib/nova/instances/tmp34p26_cv to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:35:34 compute-0 nova_compute[117413]: 2025-10-08 16:35:34.589 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:34 compute-0 nova_compute[117413]: 2025-10-08 16:35:34.593 2 DEBUG nova.compute.manager [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp34p26_cv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:35:36 compute-0 nova_compute[117413]: 2025-10-08 16:35:36.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:36 compute-0 nova_compute[117413]: 2025-10-08 16:35:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:37 compute-0 nova_compute[117413]: 2025-10-08 16:35:37.164 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:38 compute-0 podman[149999]: 2025-10-08 16:35:38.480336571 +0000 UTC m=+0.080230088 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:35:41 compute-0 podman[150019]: 2025-10-08 16:35:41.458590924 +0000 UTC m=+0.060250088 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251007, 
org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 16:35:41 compute-0 nova_compute[117413]: 2025-10-08 16:35:41.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:41 compute-0 nova_compute[117413]: 2025-10-08 16:35:41.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:41.922 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:35:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:41.923 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:35:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:41.923 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:35:42 compute-0 nova_compute[117413]: 2025-10-08 16:35:42.732 2 DEBUG nova.compute.manager [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp34p26_cv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='c1379c09-3709-4e86-b4cc-f98d39bbec5e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:35:45 compute-0 nova_compute[117413]: 2025-10-08 16:35:45.388 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-c1379c09-3709-4e86-b4cc-f98d39bbec5e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:35:45 compute-0 nova_compute[117413]: 2025-10-08 16:35:45.389 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-c1379c09-3709-4e86-b4cc-f98d39bbec5e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:35:45 compute-0 nova_compute[117413]: 2025-10-08 16:35:45.389 2 DEBUG nova.network.neutron [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:35:45 compute-0 nova_compute[117413]: 2025-10-08 16:35:45.926 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:46 compute-0 nova_compute[117413]: 2025-10-08 16:35:46.451 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:46 compute-0 podman[150040]: 2025-10-08 16:35:46.476735163 +0000 UTC m=+0.070673721 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:35:46 compute-0 podman[150041]: 2025-10-08 16:35:46.490562274 +0000 UTC m=+0.091472884 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 16:35:46 compute-0 nova_compute[117413]: 2025-10-08 16:35:46.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:46 compute-0 nova_compute[117413]: 2025-10-08 16:35:46.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:46 compute-0 nova_compute[117413]: 2025-10-08 16:35:46.613 2 DEBUG nova.network.neutron [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Updating instance_info_cache with network_info: [{"id": "4cede08a-205a-4371-ba5c-1d8aa1970064", "address": "fa:16:3e:c7:42:b5", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cede08a-20", "ovs_interfaceid": "4cede08a-205a-4371-ba5c-1d8aa1970064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:35:47 compute-0 nova_compute[117413]: 2025-10-08 16:35:47.503 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-c1379c09-3709-4e86-b4cc-f98d39bbec5e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:35:47 compute-0 nova_compute[117413]: 2025-10-08 16:35:47.518 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp34p26_cv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='c1379c09-3709-4e86-b4cc-f98d39bbec5e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:35:47 compute-0 nova_compute[117413]: 2025-10-08 16:35:47.520 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Creating instance directory: /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:35:47 compute-0 nova_compute[117413]: 2025-10-08 16:35:47.521 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Creating disk.info with the contents: {'/var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk': 'qcow2', '/var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:35:47 compute-0 nova_compute[117413]: 2025-10-08 16:35:47.521 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:35:47 compute-0 nova_compute[117413]: 2025-10-08 16:35:47.522 2 DEBUG nova.objects.instance [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c1379c09-3709-4e86-b4cc-f98d39bbec5e obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.029 2 DEBUG oslo_utils.imageutils.format_inspector [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.036 2 DEBUG oslo_utils.imageutils.format_inspector [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.039 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.127 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.130 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.131 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.132 2 DEBUG oslo_utils.imageutils.format_inspector [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.140 2 DEBUG oslo_utils.imageutils.format_inspector [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.141 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.211 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.213 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.257 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.259 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.128s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.259 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.329 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.332 2 DEBUG nova.virt.disk.api [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.332 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.420 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.421 2 DEBUG nova.virt.disk.api [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.422 2 DEBUG nova.objects.instance [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid c1379c09-3709-4e86-b4cc-f98d39bbec5e obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.931 2 DEBUG nova.objects.base [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<c1379c09-3709-4e86-b4cc-f98d39bbec5e> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.932 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.980 2 DEBUG oslo_concurrency.processutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk.config 497664" returned: 0 in 0.047s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.982 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.984 2 DEBUG nova.virt.libvirt.vif [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1751847669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1751847669',id=22,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:35:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='096bcdb2ee9d4587b808e167326cbd88',ramdisk_id='',reservation_id='r-z2aispke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:35:00Z,user_data=None,user_id='4a25bc3a607548039426bbcfe6b35524',uuid=c1379c09-3709-4e86-b4cc-f98d39bbec5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cede08a-205a-4371-ba5c-1d8aa1970064", "address": "fa:16:3e:c7:42:b5", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4cede08a-20", "ovs_interfaceid": "4cede08a-205a-4371-ba5c-1d8aa1970064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.985 2 DEBUG nova.network.os_vif_util [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "4cede08a-205a-4371-ba5c-1d8aa1970064", "address": "fa:16:3e:c7:42:b5", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4cede08a-20", "ovs_interfaceid": "4cede08a-205a-4371-ba5c-1d8aa1970064", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.986 2 DEBUG nova.network.os_vif_util [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:42:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cede08a-205a-4371-ba5c-1d8aa1970064,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cede08a-20') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.987 2 DEBUG os_vif [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:42:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cede08a-205a-4371-ba5c-1d8aa1970064,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cede08a-20') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.988 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.989 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'fba43825-dfff-5e90-be6e-578cb7ac8d7b', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.998 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4cede08a-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:48 compute-0 nova_compute[117413]: 2025-10-08 16:35:48.999 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap4cede08a-20, col_values=(('qos', UUID('162db5a7-c03c-4171-8fed-5f3899e49e36')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap4cede08a-20, col_values=(('external_ids', {'iface-id': '4cede08a-205a-4371-ba5c-1d8aa1970064', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:42:b5', 'vm-uuid': 'c1379c09-3709-4e86-b4cc-f98d39bbec5e'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:49 compute-0 NetworkManager[1034]: <info>  [1759941349.0026] manager: (tap4cede08a-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.012 2 INFO os_vif [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:42:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cede08a-205a-4371-ba5c-1d8aa1970064,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cede08a-20')
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.013 2 DEBUG nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.013 2 DEBUG nova.compute.manager [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp34p26_cv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='c1379c09-3709-4e86-b4cc-f98d39bbec5e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.014 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.188 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.896 2 DEBUG nova.network.neutron [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Port 4cede08a-205a-4371-ba5c-1d8aa1970064 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:35:49 compute-0 nova_compute[117413]: 2025-10-08 16:35:49.910 2 DEBUG nova.compute.manager [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp34p26_cv',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='c1379c09-3709-4e86-b4cc-f98d39bbec5e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:35:51 compute-0 nova_compute[117413]: 2025-10-08 16:35:51.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 16:35:53 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 16:35:53 compute-0 kernel: tap4cede08a-20: entered promiscuous mode
Oct 08 16:35:53 compute-0 NetworkManager[1034]: <info>  [1759941353.4195] manager: (tap4cede08a-20): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 ovn_controller[19768]: 2025-10-08T16:35:53Z|00181|binding|INFO|Claiming lport 4cede08a-205a-4371-ba5c-1d8aa1970064 for this additional chassis.
Oct 08 16:35:53 compute-0 ovn_controller[19768]: 2025-10-08T16:35:53Z|00182|binding|INFO|4cede08a-205a-4371-ba5c-1d8aa1970064: Claiming fa:16:3e:c7:42:b5 10.100.0.8
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 systemd-udevd[150137]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 ovn_controller[19768]: 2025-10-08T16:35:53Z|00183|binding|INFO|Setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 ovn-installed in OVS
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 NetworkManager[1034]: <info>  [1759941353.5010] device (tap4cede08a-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:35:53 compute-0 NetworkManager[1034]: <info>  [1759941353.5020] device (tap4cede08a-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:35:53 compute-0 systemd-machined[77548]: New machine qemu-17-instance-00000016.
Oct 08 16:35:53 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000016.
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.571 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:42:b5 10.100.0.8'], port_security=['fa:16:3e:c7:42:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c1379c09-3709-4e86-b4cc-f98d39bbec5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '10', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4cede08a-205a-4371-ba5c-1d8aa1970064) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.572 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 4cede08a-205a-4371-ba5c-1d8aa1970064 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.573 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c3c839c-22dd-4557-90e7-00c4261e25fa
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.584 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cb093f90-340c-4241-a81c-f914e8a2bb2b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.585 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c3c839c-21 in ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.588 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c3c839c-20 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.588 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c329a0dc-3c88-40e6-b386-e7163882579a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.589 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[737c6fb7-88ed-4484-87b9-936e451d2507]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.603 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[6abfeca3-dc46-4fae-bebc-11788e902331]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.618 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9aba6a22-9249-4e05-a9e1-9bb811d76a54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.667 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[34b080f2-7cd3-4a75-8c75-b9508d6c129f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 NetworkManager[1034]: <info>  [1759941353.6744] manager: (tap4c3c839c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.673 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[61bca677-b942-48d2-8e8e-abc1dcea8191]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.713 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[951c79b4-e22c-435f-8bf0-371e7db9fd57]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.716 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1af5cb-b145-4dbf-88f6-a5b9559ad8f1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 NetworkManager[1034]: <info>  [1759941353.7440] device (tap4c3c839c-20): carrier: link connected
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.757 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[218d0faf-ca39-48b6-9bd2-e5cf9476deaf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.780 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[692f45c2-520b-49d5-9c22-a226a6e37217]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c3c839c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:54:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 264243, 'reachable_time': 37791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150173, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.801 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e64e22-167d-4f9d-8d2f-98bfacf7782d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:545f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 264243, 'tstamp': 264243}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150174, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.834 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[00b49e02-5dab-4132-8e1b-5ddf28420fe8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c3c839c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:54:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 264243, 'reachable_time': 37791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 150175, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.875 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ca164419-4023-4c7b-b2d2-e9e393c35adc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.972 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa52830-8780-455f-9b92-2b468fdea62c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.974 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3c839c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.975 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.975 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c3c839c-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 kernel: tap4c3c839c-20: entered promiscuous mode
Oct 08 16:35:53 compute-0 NetworkManager[1034]: <info>  [1759941353.9791] manager: (tap4c3c839c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:53.984 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c3c839c-20, col_values=(('external_ids', {'iface-id': '736ca924-b754-456e-b784-63a1a480c78a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:53 compute-0 nova_compute[117413]: 2025-10-08 16:35:53.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:53 compute-0 ovn_controller[19768]: 2025-10-08T16:35:53Z|00184|binding|INFO|Releasing lport 736ca924-b754-456e-b784-63a1a480c78a from this chassis (sb_readonly=0)
Oct 08 16:35:54 compute-0 nova_compute[117413]: 2025-10-08 16:35:54.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:54 compute-0 nova_compute[117413]: 2025-10-08 16:35:54.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.012 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[26b68531-0aec-46dd-ab05-296c140e09cd]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.014 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.014 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.014 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 4c3c839c-22dd-4557-90e7-00c4261e25fa disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.014 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.015 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3be0c229-e606-40d9-82d0-fc649d2e0ec5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.016 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.017 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f7fd68-5b3b-4353-843c-17cbadc2d49c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.019 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-4c3c839c-22dd-4557-90e7-00c4261e25fa
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 4c3c839c-22dd-4557-90e7-00c4261e25fa
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:35:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:54.020 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'env', 'PROCESS_TAG=haproxy-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c3c839c-22dd-4557-90e7-00c4261e25fa.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:35:54 compute-0 podman[150214]: 2025-10-08 16:35:54.414968948 +0000 UTC m=+0.059279000 container create 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007)
Oct 08 16:35:54 compute-0 systemd[1]: Started libpod-conmon-71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600.scope.
Oct 08 16:35:54 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:35:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c381a76d46a314fba024138b185a0af863d4bf9a59d8412b80d516ac3f218548/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:35:54 compute-0 podman[150214]: 2025-10-08 16:35:54.38883707 +0000 UTC m=+0.033147122 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:35:54 compute-0 podman[150214]: 2025-10-08 16:35:54.484400932 +0000 UTC m=+0.128710994 container init 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:35:54 compute-0 podman[150214]: 2025-10-08 16:35:54.490230971 +0000 UTC m=+0.134541043 container start 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 08 16:35:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [NOTICE]   (150233) : New worker (150235) forked
Oct 08 16:35:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [NOTICE]   (150233) : Loading success.
Oct 08 16:35:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:55.580 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:35:55 compute-0 nova_compute[117413]: 2025-10-08 16:35:55.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:55.583 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:35:56 compute-0 ovn_controller[19768]: 2025-10-08T16:35:56Z|00185|binding|INFO|Claiming lport 4cede08a-205a-4371-ba5c-1d8aa1970064 for this chassis.
Oct 08 16:35:56 compute-0 ovn_controller[19768]: 2025-10-08T16:35:56Z|00186|binding|INFO|4cede08a-205a-4371-ba5c-1d8aa1970064: Claiming fa:16:3e:c7:42:b5 10.100.0.8
Oct 08 16:35:56 compute-0 ovn_controller[19768]: 2025-10-08T16:35:56Z|00187|binding|INFO|Setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 up in Southbound
Oct 08 16:35:56 compute-0 nova_compute[117413]: 2025-10-08 16:35:56.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:57 compute-0 podman[150258]: 2025-10-08 16:35:57.472066098 +0000 UTC m=+0.075015207 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:35:57 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:35:57.585 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:35:57 compute-0 nova_compute[117413]: 2025-10-08 16:35:57.644 2 INFO nova.compute.manager [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Post operation of migration started
Oct 08 16:35:57 compute-0 nova_compute[117413]: 2025-10-08 16:35:57.645 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:58 compute-0 nova_compute[117413]: 2025-10-08 16:35:58.186 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:58 compute-0 nova_compute[117413]: 2025-10-08 16:35:58.187 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:58 compute-0 nova_compute[117413]: 2025-10-08 16:35:58.279 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-c1379c09-3709-4e86-b4cc-f98d39bbec5e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:35:58 compute-0 nova_compute[117413]: 2025-10-08 16:35:58.280 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-c1379c09-3709-4e86-b4cc-f98d39bbec5e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:35:58 compute-0 nova_compute[117413]: 2025-10-08 16:35:58.280 2 DEBUG nova.network.neutron [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:35:58 compute-0 nova_compute[117413]: 2025-10-08 16:35:58.787 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:59 compute-0 nova_compute[117413]: 2025-10-08 16:35:59.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:35:59 compute-0 nova_compute[117413]: 2025-10-08 16:35:59.702 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:35:59 compute-0 podman[127881]: time="2025-10-08T16:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:35:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:35:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3490 "" "Go-http-client/1.1"
Oct 08 16:35:59 compute-0 nova_compute[117413]: 2025-10-08 16:35:59.912 2 DEBUG nova.network.neutron [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Updating instance_info_cache with network_info: [{"id": "4cede08a-205a-4371-ba5c-1d8aa1970064", "address": "fa:16:3e:c7:42:b5", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cede08a-20", "ovs_interfaceid": "4cede08a-205a-4371-ba5c-1d8aa1970064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:36:00 compute-0 nova_compute[117413]: 2025-10-08 16:36:00.421 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-c1379c09-3709-4e86-b4cc-f98d39bbec5e" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:36:00 compute-0 nova_compute[117413]: 2025-10-08 16:36:00.947 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:00 compute-0 nova_compute[117413]: 2025-10-08 16:36:00.948 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:00 compute-0 nova_compute[117413]: 2025-10-08 16:36:00.948 2 DEBUG oslo_concurrency.lockutils [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:00 compute-0 nova_compute[117413]: 2025-10-08 16:36:00.955 2 INFO nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:36:00 compute-0 virtqemud[117740]: Domain id=17 name='instance-00000016' uuid=c1379c09-3709-4e86-b4cc-f98d39bbec5e is tainted: custom-monitor
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: ERROR   16:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: ERROR   16:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: ERROR   16:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: ERROR   16:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:36:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:36:01 compute-0 nova_compute[117413]: 2025-10-08 16:36:01.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:01 compute-0 nova_compute[117413]: 2025-10-08 16:36:01.963 2 INFO nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:36:02 compute-0 nova_compute[117413]: 2025-10-08 16:36:02.969 2 INFO nova.virt.libvirt.driver [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:36:02 compute-0 nova_compute[117413]: 2025-10-08 16:36:02.974 2 DEBUG nova.compute.manager [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:36:03 compute-0 podman[150279]: 2025-10-08 16:36:03.482246879 +0000 UTC m=+0.077008935 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Oct 08 16:36:03 compute-0 nova_compute[117413]: 2025-10-08 16:36:03.484 2 DEBUG nova.objects.instance [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:36:04 compute-0 nova_compute[117413]: 2025-10-08 16:36:04.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:04 compute-0 nova_compute[117413]: 2025-10-08 16:36:04.510 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:36:05 compute-0 nova_compute[117413]: 2025-10-08 16:36:05.212 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:36:05 compute-0 nova_compute[117413]: 2025-10-08 16:36:05.213 2 WARNING neutronclient.v2_0.client [None req-874a5dcd-e674-4659-9b75-6a09cec0090c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:36:06 compute-0 nova_compute[117413]: 2025-10-08 16:36:06.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:09 compute-0 nova_compute[117413]: 2025-10-08 16:36:09.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:09 compute-0 nova_compute[117413]: 2025-10-08 16:36:09.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:09 compute-0 podman[150301]: 2025-10-08 16:36:09.460890547 +0000 UTC m=+0.065964244 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:36:11 compute-0 nova_compute[117413]: 2025-10-08 16:36:11.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:11 compute-0 nova_compute[117413]: 2025-10-08 16:36:11.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:11 compute-0 nova_compute[117413]: 2025-10-08 16:36:11.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:11 compute-0 nova_compute[117413]: 2025-10-08 16:36:11.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:11 compute-0 nova_compute[117413]: 2025-10-08 16:36:11.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:11 compute-0 nova_compute[117413]: 2025-10-08 16:36:11.878 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:36:11 compute-0 podman[150322]: 2025-10-08 16:36:11.996618106 +0000 UTC m=+0.067649234 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:36:12 compute-0 nova_compute[117413]: 2025-10-08 16:36:12.934 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.017 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.018 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.103 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.277 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.278 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.299 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.299 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5977MB free_disk=73.22187423706055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.300 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:13 compute-0 nova_compute[117413]: 2025-10-08 16:36:13.300 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:14 compute-0 nova_compute[117413]: 2025-10-08 16:36:14.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:14 compute-0 nova_compute[117413]: 2025-10-08 16:36:14.855 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance c1379c09-3709-4e86-b4cc-f98d39bbec5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:36:14 compute-0 nova_compute[117413]: 2025-10-08 16:36:14.855 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:36:14 compute-0 nova_compute[117413]: 2025-10-08 16:36:14.856 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:36:13 up 44 min,  0 user,  load average: 0.00, 0.09, 0.18\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_096bcdb2ee9d4587b808e167326cbd88': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:36:14 compute-0 nova_compute[117413]: 2025-10-08 16:36:14.896 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.404 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.920 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.921 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.621s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.973 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.974 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.974 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.974 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.975 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:15 compute-0 nova_compute[117413]: 2025-10-08 16:36:15.989 2 INFO nova.compute.manager [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Terminating instance
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.511 2 DEBUG nova.compute.manager [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:36:16 compute-0 kernel: tap4cede08a-20 (unregistering): left promiscuous mode
Oct 08 16:36:16 compute-0 NetworkManager[1034]: <info>  [1759941376.5411] device (tap4cede08a-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00188|binding|INFO|Releasing lport 4cede08a-205a-4371-ba5c-1d8aa1970064 from this chassis (sb_readonly=0)
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00189|binding|INFO|Setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 down in Southbound
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00190|binding|INFO|Removing iface tap4cede08a-20 ovn-installed in OVS
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.564 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:42:b5 10.100.0.8'], port_security=['fa:16:3e:c7:42:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c1379c09-3709-4e86-b4cc-f98d39bbec5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '15', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=4cede08a-205a-4371-ba5c-1d8aa1970064) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.564 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 4cede08a-205a-4371-ba5c-1d8aa1970064 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.565 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.566 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e01b8278-9a50-4f44-ab27-2351a946452f]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.567 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa namespace which is not needed anymore
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 08 16:36:16 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000016.scope: Consumed 2.371s CPU time.
Oct 08 16:36:16 compute-0 systemd-machined[77548]: Machine qemu-17-instance-00000016 terminated.
Oct 08 16:36:16 compute-0 podman[150352]: 2025-10-08 16:36:16.646483063 +0000 UTC m=+0.069451946 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:36:16 compute-0 podman[150353]: 2025-10-08 16:36:16.678632955 +0000 UTC m=+0.093756310 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Oct 08 16:36:16 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [NOTICE]   (150233) : haproxy version is 3.0.5-8e879a5
Oct 08 16:36:16 compute-0 podman[150416]: 2025-10-08 16:36:16.710321514 +0000 UTC m=+0.032158263 container kill 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:36:16 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [NOTICE]   (150233) : path to executable is /usr/sbin/haproxy
Oct 08 16:36:16 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [WARNING]  (150233) : Exiting Master process...
Oct 08 16:36:16 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [ALERT]    (150233) : Current worker (150235) exited with code 143 (Terminated)
Oct 08 16:36:16 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150229]: [WARNING]  (150233) : All workers exited. Exiting... (0)
Oct 08 16:36:16 compute-0 systemd[1]: libpod-71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600.scope: Deactivated successfully.
Oct 08 16:36:16 compute-0 kernel: tap4cede08a-20: entered promiscuous mode
Oct 08 16:36:16 compute-0 kernel: tap4cede08a-20 (unregistering): left promiscuous mode
Oct 08 16:36:16 compute-0 NetworkManager[1034]: <info>  [1759941376.7347] manager: (tap4cede08a-20): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00191|binding|INFO|Claiming lport 4cede08a-205a-4371-ba5c-1d8aa1970064 for this chassis.
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00192|binding|INFO|4cede08a-205a-4371-ba5c-1d8aa1970064: Claiming fa:16:3e:c7:42:b5 10.100.0.8
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.745 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:42:b5 10.100.0.8'], port_security=['fa:16:3e:c7:42:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c1379c09-3709-4e86-b4cc-f98d39bbec5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '15', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=4cede08a-205a-4371-ba5c-1d8aa1970064) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00193|binding|INFO|Setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 ovn-installed in OVS
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00194|binding|INFO|Setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 up in Southbound
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00195|binding|INFO|Releasing lport 4cede08a-205a-4371-ba5c-1d8aa1970064 from this chassis (sb_readonly=1)
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00196|if_status|INFO|Not setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 down as sb is readonly
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00197|binding|INFO|Removing iface tap4cede08a-20 ovn-installed in OVS
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00198|binding|INFO|Releasing lport 4cede08a-205a-4371-ba5c-1d8aa1970064 from this chassis (sb_readonly=0)
Oct 08 16:36:16 compute-0 ovn_controller[19768]: 2025-10-08T16:36:16Z|00199|binding|INFO|Setting lport 4cede08a-205a-4371-ba5c-1d8aa1970064 down in Southbound
Oct 08 16:36:16 compute-0 podman[150435]: 2025-10-08 16:36:16.769067448 +0000 UTC m=+0.032497273 container died 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.771 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:42:b5 10.100.0.8'], port_security=['fa:16:3e:c7:42:b5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c1379c09-3709-4e86-b4cc-f98d39bbec5e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '17', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=4cede08a-205a-4371-ba5c-1d8aa1970064) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.785 2 DEBUG nova.compute.manager [req-0dbf4759-c4ce-41d5-9506-0e53464346e0 req-293752b7-4561-4421-9a85-cad21f8cfdcf c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.785 2 DEBUG oslo_concurrency.lockutils [req-0dbf4759-c4ce-41d5-9506-0e53464346e0 req-293752b7-4561-4421-9a85-cad21f8cfdcf c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.785 2 DEBUG oslo_concurrency.lockutils [req-0dbf4759-c4ce-41d5-9506-0e53464346e0 req-293752b7-4561-4421-9a85-cad21f8cfdcf c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.785 2 DEBUG oslo_concurrency.lockutils [req-0dbf4759-c4ce-41d5-9506-0e53464346e0 req-293752b7-4561-4421-9a85-cad21f8cfdcf c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.785 2 DEBUG nova.compute.manager [req-0dbf4759-c4ce-41d5-9506-0e53464346e0 req-293752b7-4561-4421-9a85-cad21f8cfdcf c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] No waiting events found dispatching network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.786 2 DEBUG nova.compute.manager [req-0dbf4759-c4ce-41d5-9506-0e53464346e0 req-293752b7-4561-4421-9a85-cad21f8cfdcf c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.793 2 INFO nova.virt.libvirt.driver [-] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Instance destroyed successfully.
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.793 2 DEBUG nova.objects.instance [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lazy-loading 'resources' on Instance uuid c1379c09-3709-4e86-b4cc-f98d39bbec5e obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600-userdata-shm.mount: Deactivated successfully.
Oct 08 16:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c381a76d46a314fba024138b185a0af863d4bf9a59d8412b80d516ac3f218548-merged.mount: Deactivated successfully.
Oct 08 16:36:16 compute-0 podman[150435]: 2025-10-08 16:36:16.8125546 +0000 UTC m=+0.075984415 container cleanup 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:36:16 compute-0 systemd[1]: libpod-conmon-71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600.scope: Deactivated successfully.
Oct 08 16:36:16 compute-0 podman[150438]: 2025-10-08 16:36:16.830962224 +0000 UTC m=+0.083373460 container remove 71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.839 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8126477e-081e-49e4-80ca-a49421ad6ec3]: (4, ("Wed Oct  8 04:36:16 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa (71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600)\n71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600\nWed Oct  8 04:36:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa (71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600)\n71f172009edabcdcff8b22485d92f17d895040327f2e0203c0e6ef7981e46600\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.841 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c30d2026-f5c2-4bb7-934a-159ab765a18d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.842 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.842 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[21a896e8-5a5b-44f9-a301-636eebb138e8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.843 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3c839c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:36:16 compute-0 kernel: tap4c3c839c-20: left promiscuous mode
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.864 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1d591f-3dff-4c31-a4ce-60e6868a6ebf]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.889 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[366909ac-e33f-4e0f-959e-99959ef75090]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.890 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c35193fa-068c-454a-9e9d-c306cb7be1e0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.907 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[269588f2-60ed-428c-bf0f-15c09dd3805e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 264234, 'reachable_time': 38422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150489, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.909 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.909 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[a93548f6-5ad7-4daf-b8f1-a88d6836044b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.910 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 4cede08a-205a-4371-ba5c-1d8aa1970064 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:36:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d4c3c839c\x2d22dd\x2d4557\x2d90e7\x2d00c4261e25fa.mount: Deactivated successfully.
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.911 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.911 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c3be4398-ef90-411c-a41a-9d12ad8ed3ff]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.912 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 4cede08a-205a-4371-ba5c-1d8aa1970064 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.912 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:36:16 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:16.913 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5cea30-2125-4566-88e5-f6e05208b386]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.922 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.922 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.923 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.923 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.923 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:16 compute-0 nova_compute[117413]: 2025-10-08 16:36:16.923 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.300 2 DEBUG nova.virt.libvirt.vif [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:34:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1751847669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1751847669',id=22,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:35:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='096bcdb2ee9d4587b808e167326cbd88',ramdisk_id='',reservation_id='r-z2aispke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:36:04Z,user_data=None,user_id='4a25bc3a607548039426bbcfe6b35524',uuid=c1379c09-3709-4e86-b4cc-f98d39bbec5e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4cede08a-205a-4371-ba5c-1d8aa1970064", "address": "fa:16:3e:c7:42:b5", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cede08a-20", "ovs_interfaceid": "4cede08a-205a-4371-ba5c-1d8aa1970064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.301 2 DEBUG nova.network.os_vif_util [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Converting VIF {"id": "4cede08a-205a-4371-ba5c-1d8aa1970064", "address": "fa:16:3e:c7:42:b5", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4cede08a-20", "ovs_interfaceid": "4cede08a-205a-4371-ba5c-1d8aa1970064", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.301 2 DEBUG nova.network.os_vif_util [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:42:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cede08a-205a-4371-ba5c-1d8aa1970064,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cede08a-20') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.302 2 DEBUG os_vif [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:42:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cede08a-205a-4371-ba5c-1d8aa1970064,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cede08a-20') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.304 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4cede08a-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.308 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=162db5a7-c03c-4171-8fed-5f3899e49e36) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.312 2 INFO os_vif [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:42:b5,bridge_name='br-int',has_traffic_filtering=True,id=4cede08a-205a-4371-ba5c-1d8aa1970064,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4cede08a-20')
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.312 2 INFO nova.virt.libvirt.driver [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Deleting instance files /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e_del
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.313 2 INFO nova.virt.libvirt.driver [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Deletion of /var/lib/nova/instances/c1379c09-3709-4e86-b4cc-f98d39bbec5e_del complete
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.824 2 INFO nova.compute.manager [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.825 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.825 2 DEBUG nova.compute.manager [-] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.825 2 DEBUG nova.network.neutron [-] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:36:17 compute-0 nova_compute[117413]: 2025-10-08 16:36:17.826 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.181 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.846 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.846 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.847 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.847 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.847 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] No waiting events found dispatching network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.847 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.847 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-plugged-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.848 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.848 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.848 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.849 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] No waiting events found dispatching network-vif-plugged-4cede08a-205a-4371-ba5c-1d8aa1970064 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.849 2 WARNING nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received unexpected event network-vif-plugged-4cede08a-205a-4371-ba5c-1d8aa1970064 for instance with vm_state active and task_state deleting.
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.849 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-plugged-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.849 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.850 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.850 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.850 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] No waiting events found dispatching network-vif-plugged-4cede08a-205a-4371-ba5c-1d8aa1970064 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.850 2 WARNING nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received unexpected event network-vif-plugged-4cede08a-205a-4371-ba5c-1d8aa1970064 for instance with vm_state active and task_state deleting.
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.851 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.851 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.851 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.852 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.852 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] No waiting events found dispatching network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.852 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.852 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.853 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.853 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.853 2 DEBUG oslo_concurrency.lockutils [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.853 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] No waiting events found dispatching network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.854 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-unplugged-4cede08a-205a-4371-ba5c-1d8aa1970064 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.854 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Received event network-vif-deleted-4cede08a-205a-4371-ba5c-1d8aa1970064 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.854 2 INFO nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Neutron deleted interface 4cede08a-205a-4371-ba5c-1d8aa1970064; detaching it from the instance and deleting it from the info cache
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.855 2 DEBUG nova.network.neutron [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:36:18 compute-0 nova_compute[117413]: 2025-10-08 16:36:18.968 2 DEBUG nova.network.neutron [-] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:36:19 compute-0 nova_compute[117413]: 2025-10-08 16:36:19.363 2 DEBUG nova.compute.manager [req-667d3c50-97cb-4892-889b-9013557e2c1c req-594893c9-8270-485d-ab75-a58ab06a0030 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Detach interface failed, port_id=4cede08a-205a-4371-ba5c-1d8aa1970064, reason: Instance c1379c09-3709-4e86-b4cc-f98d39bbec5e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:36:19 compute-0 nova_compute[117413]: 2025-10-08 16:36:19.477 2 INFO nova.compute.manager [-] [instance: c1379c09-3709-4e86-b4cc-f98d39bbec5e] Took 1.65 seconds to deallocate network for instance.
Oct 08 16:36:19 compute-0 nova_compute[117413]: 2025-10-08 16:36:19.997 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:19 compute-0 nova_compute[117413]: 2025-10-08 16:36:19.998 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:20 compute-0 nova_compute[117413]: 2025-10-08 16:36:20.045 2 DEBUG nova.compute.provider_tree [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:36:20 compute-0 nova_compute[117413]: 2025-10-08 16:36:20.553 2 DEBUG nova.scheduler.client.report [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:36:21 compute-0 nova_compute[117413]: 2025-10-08 16:36:21.063 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.065s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:21 compute-0 nova_compute[117413]: 2025-10-08 16:36:21.083 2 INFO nova.scheduler.client.report [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Deleted allocations for instance c1379c09-3709-4e86-b4cc-f98d39bbec5e
Oct 08 16:36:21 compute-0 nova_compute[117413]: 2025-10-08 16:36:21.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:22 compute-0 nova_compute[117413]: 2025-10-08 16:36:22.127 2 DEBUG oslo_concurrency.lockutils [None req-44c35cfe-46e8-4198-a2be-3015257e8f63 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "c1379c09-3709-4e86-b4cc-f98d39bbec5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.154s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:22 compute-0 nova_compute[117413]: 2025-10-08 16:36:22.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:26 compute-0 nova_compute[117413]: 2025-10-08 16:36:26.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:27 compute-0 nova_compute[117413]: 2025-10-08 16:36:27.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:27 compute-0 nova_compute[117413]: 2025-10-08 16:36:27.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:36:28 compute-0 podman[150490]: 2025-10-08 16:36:28.486347122 +0000 UTC m=+0.078223630 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:36:29 compute-0 podman[127881]: time="2025-10-08T16:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:36:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:36:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3033 "" "Go-http-client/1.1"
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: ERROR   16:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: ERROR   16:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: ERROR   16:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: ERROR   16:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:36:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:36:31 compute-0 nova_compute[117413]: 2025-10-08 16:36:31.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:32 compute-0 nova_compute[117413]: 2025-10-08 16:36:32.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:34 compute-0 podman[150511]: 2025-10-08 16:36:34.466983397 +0000 UTC m=+0.064748359 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, architecture=x86_64)
Oct 08 16:36:36 compute-0 nova_compute[117413]: 2025-10-08 16:36:36.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:37 compute-0 nova_compute[117413]: 2025-10-08 16:36:37.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:40 compute-0 podman[150533]: 2025-10-08 16:36:40.49128604 +0000 UTC m=+0.079046444 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 08 16:36:41 compute-0 nova_compute[117413]: 2025-10-08 16:36:41.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:41.924 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:36:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:41.924 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:36:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:36:41.924 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:36:42 compute-0 nova_compute[117413]: 2025-10-08 16:36:42.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:42 compute-0 podman[150554]: 2025-10-08 16:36:42.452191815 +0000 UTC m=+0.057823788 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 08 16:36:46 compute-0 nova_compute[117413]: 2025-10-08 16:36:46.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:47 compute-0 nova_compute[117413]: 2025-10-08 16:36:47.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:47 compute-0 podman[150573]: 2025-10-08 16:36:47.487066039 +0000 UTC m=+0.086796218 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:36:47 compute-0 podman[150574]: 2025-10-08 16:36:47.526916075 +0000 UTC m=+0.109681432 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 08 16:36:51 compute-0 nova_compute[117413]: 2025-10-08 16:36:51.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:52 compute-0 nova_compute[117413]: 2025-10-08 16:36:52.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:56 compute-0 nova_compute[117413]: 2025-10-08 16:36:56.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:57 compute-0 nova_compute[117413]: 2025-10-08 16:36:57.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:36:59 compute-0 podman[150623]: 2025-10-08 16:36:59.478089424 +0000 UTC m=+0.075934724 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:36:59 compute-0 podman[127881]: time="2025-10-08T16:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:36:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:36:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3025 "" "Go-http-client/1.1"
Oct 08 16:37:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:01.288 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:37:01 compute-0 nova_compute[117413]: 2025-10-08 16:37:01.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:01.289 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: ERROR   16:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: ERROR   16:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: ERROR   16:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: ERROR   16:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:37:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:37:01 compute-0 nova_compute[117413]: 2025-10-08 16:37:01.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:02 compute-0 nova_compute[117413]: 2025-10-08 16:37:02.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:04.291 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:05 compute-0 podman[150645]: 2025-10-08 16:37:05.46700843 +0000 UTC m=+0.068468167 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 08 16:37:06 compute-0 nova_compute[117413]: 2025-10-08 16:37:06.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:07 compute-0 nova_compute[117413]: 2025-10-08 16:37:07.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:10 compute-0 nova_compute[117413]: 2025-10-08 16:37:10.867 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:11 compute-0 nova_compute[117413]: 2025-10-08 16:37:11.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:11 compute-0 podman[150666]: 2025-10-08 16:37:11.471313122 +0000 UTC m=+0.075828641 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Oct 08 16:37:11 compute-0 nova_compute[117413]: 2025-10-08 16:37:11.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:11 compute-0 nova_compute[117413]: 2025-10-08 16:37:11.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:11 compute-0 nova_compute[117413]: 2025-10-08 16:37:11.882 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:11 compute-0 nova_compute[117413]: 2025-10-08 16:37:11.882 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:11 compute-0 nova_compute[117413]: 2025-10-08 16:37:11.882 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.062 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.063 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.091 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.092 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6181MB free_disk=73.25062942504883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.092 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.093 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:12 compute-0 nova_compute[117413]: 2025-10-08 16:37:12.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:13 compute-0 nova_compute[117413]: 2025-10-08 16:37:13.150 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:37:13 compute-0 nova_compute[117413]: 2025-10-08 16:37:13.151 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:37:12 up 45 min,  0 user,  load average: 0.03, 0.09, 0.17\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:37:13 compute-0 nova_compute[117413]: 2025-10-08 16:37:13.179 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:37:13 compute-0 podman[150689]: 2025-10-08 16:37:13.46166605 +0000 UTC m=+0.066420346 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:37:13 compute-0 nova_compute[117413]: 2025-10-08 16:37:13.687 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:37:14 compute-0 nova_compute[117413]: 2025-10-08 16:37:14.196 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:37:14 compute-0 nova_compute[117413]: 2025-10-08 16:37:14.196 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.103s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:15 compute-0 nova_compute[117413]: 2025-10-08 16:37:15.196 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:15 compute-0 nova_compute[117413]: 2025-10-08 16:37:15.197 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:15 compute-0 nova_compute[117413]: 2025-10-08 16:37:15.197 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:37:15 compute-0 nova_compute[117413]: 2025-10-08 16:37:15.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:16 compute-0 nova_compute[117413]: 2025-10-08 16:37:16.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:16 compute-0 nova_compute[117413]: 2025-10-08 16:37:16.588 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Creating tmpfile /var/lib/nova/instances/tmpqs0bxd6e to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:37:16 compute-0 nova_compute[117413]: 2025-10-08 16:37:16.590 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:16 compute-0 nova_compute[117413]: 2025-10-08 16:37:16.596 2 DEBUG nova.compute.manager [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqs0bxd6e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:37:16 compute-0 nova_compute[117413]: 2025-10-08 16:37:16.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:17 compute-0 nova_compute[117413]: 2025-10-08 16:37:17.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:17 compute-0 nova_compute[117413]: 2025-10-08 16:37:17.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:18 compute-0 podman[150708]: 2025-10-08 16:37:18.46120635 +0000 UTC m=+0.063506943 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:37:18 compute-0 podman[150709]: 2025-10-08 16:37:18.490345595 +0000 UTC m=+0.095167311 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 08 16:37:18 compute-0 nova_compute[117413]: 2025-10-08 16:37:18.631 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:20 compute-0 nova_compute[117413]: 2025-10-08 16:37:20.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:37:21 compute-0 nova_compute[117413]: 2025-10-08 16:37:21.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:22 compute-0 nova_compute[117413]: 2025-10-08 16:37:22.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:23 compute-0 nova_compute[117413]: 2025-10-08 16:37:23.117 2 DEBUG nova.compute.manager [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqs0bxd6e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dd8c9e9c-275f-4be0-b687-0cf9f1d6f061',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:37:24 compute-0 nova_compute[117413]: 2025-10-08 16:37:24.139 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:37:24 compute-0 nova_compute[117413]: 2025-10-08 16:37:24.141 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:37:24 compute-0 nova_compute[117413]: 2025-10-08 16:37:24.141 2 DEBUG nova.network.neutron [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:37:24 compute-0 nova_compute[117413]: 2025-10-08 16:37:24.680 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.188 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.367 2 DEBUG nova.network.neutron [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Updating instance_info_cache with network_info: [{"id": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "address": "fa:16:3e:50:c2:ac", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbe9cf6e-8c", "ovs_interfaceid": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.875 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.892 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqs0bxd6e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dd8c9e9c-275f-4be0-b687-0cf9f1d6f061',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.893 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Creating instance directory: /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.893 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Creating disk.info with the contents: {'/var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk': 'qcow2', '/var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.894 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:37:25 compute-0 nova_compute[117413]: 2025-10-08 16:37:25.895 2 DEBUG nova.objects.instance [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dd8c9e9c-275f-4be0-b687-0cf9f1d6f061 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.405 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.410 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.412 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.506 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.507 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.508 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.509 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.513 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.513 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.585 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.586 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.650 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.652 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.144s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.653 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.716 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.718 2 DEBUG nova.virt.disk.api [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.719 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.785 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.787 2 DEBUG nova.virt.disk.api [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:37:26 compute-0 nova_compute[117413]: 2025-10-08 16:37:26.787 2 DEBUG nova.objects.instance [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid dd8c9e9c-275f-4be0-b687-0cf9f1d6f061 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.295 2 DEBUG nova.objects.base [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<dd8c9e9c-275f-4be0-b687-0cf9f1d6f061> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.296 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.329 2 DEBUG oslo_concurrency.processutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061/disk.config 497664" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.330 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.332 2 DEBUG nova.virt.libvirt.vif [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1260763685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1260763685',id=24,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:36:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='096bcdb2ee9d4587b808e167326cbd88',ramdisk_id='',reservation_id='r-03bhfkgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:36:42Z,user_data=None,user_id='4a25bc3a607548039426bbcfe6b35524',uuid=dd8c9e9c-275f-4be0-b687-0cf9f1d6f061,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "address": "fa:16:3e:50:c2:ac", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapdbe9cf6e-8c", "ovs_interfaceid": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.332 2 DEBUG nova.network.os_vif_util [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "address": "fa:16:3e:50:c2:ac", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapdbe9cf6e-8c", "ovs_interfaceid": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.333 2 DEBUG nova.network.os_vif_util [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:c2:ac,bridge_name='br-int',has_traffic_filtering=True,id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbe9cf6e-8c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.334 2 DEBUG os_vif [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:c2:ac,bridge_name='br-int',has_traffic_filtering=True,id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbe9cf6e-8c') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '06bb287f-82bf-5bfb-b296-de4b5515b662', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.343 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdbe9cf6e-8c, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.344 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tapdbe9cf6e-8c, col_values=(('qos', UUID('425254c7-499a-4b55-bcfa-7c4b802b273d')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.344 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tapdbe9cf6e-8c, col_values=(('external_ids', {'iface-id': 'dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:c2:ac', 'vm-uuid': 'dd8c9e9c-275f-4be0-b687-0cf9f1d6f061'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:27 compute-0 NetworkManager[1034]: <info>  [1759941447.3469] manager: (tapdbe9cf6e-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.355 2 INFO os_vif [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:c2:ac,bridge_name='br-int',has_traffic_filtering=True,id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbe9cf6e-8c')
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.356 2 DEBUG nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.356 2 DEBUG nova.compute.manager [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqs0bxd6e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dd8c9e9c-275f-4be0-b687-0cf9f1d6f061',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.357 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.423 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.948 2 DEBUG nova.network.neutron [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Port dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:37:27 compute-0 nova_compute[117413]: 2025-10-08 16:37:27.964 2 DEBUG nova.compute.manager [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpqs0bxd6e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dd8c9e9c-275f-4be0-b687-0cf9f1d6f061',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:37:29 compute-0 podman[127881]: time="2025-10-08T16:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:37:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:37:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3019 "" "Go-http-client/1.1"
Oct 08 16:37:30 compute-0 podman[150776]: 2025-10-08 16:37:30.490450272 +0000 UTC m=+0.094867113 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:37:31 compute-0 ovn_controller[19768]: 2025-10-08T16:37:31Z|00200|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: ERROR   16:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: ERROR   16:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: ERROR   16:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: ERROR   16:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:37:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:37:31 compute-0 nova_compute[117413]: 2025-10-08 16:37:31.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:31 compute-0 kernel: tapdbe9cf6e-8c: entered promiscuous mode
Oct 08 16:37:31 compute-0 NetworkManager[1034]: <info>  [1759941451.8975] manager: (tapdbe9cf6e-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Oct 08 16:37:31 compute-0 ovn_controller[19768]: 2025-10-08T16:37:31Z|00201|binding|INFO|Claiming lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for this additional chassis.
Oct 08 16:37:31 compute-0 ovn_controller[19768]: 2025-10-08T16:37:31Z|00202|binding|INFO|dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2: Claiming fa:16:3e:50:c2:ac 10.100.0.14
Oct 08 16:37:31 compute-0 nova_compute[117413]: 2025-10-08 16:37:31.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:31 compute-0 nova_compute[117413]: 2025-10-08 16:37:31.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:31 compute-0 ovn_controller[19768]: 2025-10-08T16:37:31Z|00203|binding|INFO|Setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 ovn-installed in OVS
Oct 08 16:37:31 compute-0 nova_compute[117413]: 2025-10-08 16:37:31.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:31 compute-0 nova_compute[117413]: 2025-10-08 16:37:31.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:31 compute-0 systemd-udevd[150810]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:37:31 compute-0 NetworkManager[1034]: <info>  [1759941451.9447] device (tapdbe9cf6e-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:37:31 compute-0 NetworkManager[1034]: <info>  [1759941451.9458] device (tapdbe9cf6e-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:37:31 compute-0 systemd-machined[77548]: New machine qemu-18-instance-00000018.
Oct 08 16:37:31 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000018.
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.118 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c2:ac 10.100.0.14'], port_security=['fa:16:3e:50:c2:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9e9c-275f-4be0-b687-0cf9f1d6f061', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '10', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.119 28633 INFO neutron.agent.ovn.metadata.agent [-] Port dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.120 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c3c839c-22dd-4557-90e7-00c4261e25fa
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.134 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7459ab-1c1c-4c52-af7e-e646064d20dd]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.135 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c3c839c-21 in ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.137 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c3c839c-20 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.137 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4c496340-e19b-47af-8be2-43808c34f8dd]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.138 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4b8062-7c08-4a87-9dab-96d59f475a76]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.155 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[19be8876-42e2-4e1e-88ca-101cf5efe4be]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.173 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b4ee44-1bb7-4f71-8d77-7be6ced5ba0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.207 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6d76cc-729a-4522-8512-6eaa27a42dd5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.213 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ef09df84-c438-4196-9c72-2bc9b419e595]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 systemd-udevd[150814]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:37:32 compute-0 NetworkManager[1034]: <info>  [1759941452.2148] manager: (tap4c3c839c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.259 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a9fef820-b99e-49ef-8d7f-25bfb8aa7250]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.262 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[f796d5e9-75cb-4590-b145-85c7ae9649c5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 NetworkManager[1034]: <info>  [1759941452.2897] device (tap4c3c839c-20): carrier: link connected
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.299 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf4f999-0813-42fb-a611-afbd888e7d2f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.319 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e624bbf1-f5ec-46a0-92fe-2e6294988e63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c3c839c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:54:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 274098, 'reachable_time': 39600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 150852, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.338 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ff8eb7-3325-4e59-834a-17f276b5dbc9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:545f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 274098, 'tstamp': 274098}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 150853, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 nova_compute[117413]: 2025-10-08 16:37:32.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.360 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f336a74e-0f18-4871-b979-aaf88634432a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c3c839c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:54:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 274098, 'reachable_time': 39600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 150854, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.406 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddcb283-660b-47c1-bd95-b594ab9f0642]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.484 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[045aed5b-6dfd-47d9-8bcb-759ded5ce667]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.486 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3c839c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.486 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.486 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c3c839c-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:32 compute-0 kernel: tap4c3c839c-20: entered promiscuous mode
Oct 08 16:37:32 compute-0 nova_compute[117413]: 2025-10-08 16:37:32.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:32 compute-0 NetworkManager[1034]: <info>  [1759941452.4904] manager: (tap4c3c839c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct 08 16:37:32 compute-0 nova_compute[117413]: 2025-10-08 16:37:32.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.492 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c3c839c-20, col_values=(('external_ids', {'iface-id': '736ca924-b754-456e-b784-63a1a480c78a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:32 compute-0 nova_compute[117413]: 2025-10-08 16:37:32.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:32 compute-0 ovn_controller[19768]: 2025-10-08T16:37:32Z|00204|binding|INFO|Releasing lport 736ca924-b754-456e-b784-63a1a480c78a from this chassis (sb_readonly=0)
Oct 08 16:37:32 compute-0 nova_compute[117413]: 2025-10-08 16:37:32.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.495 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f88e1f05-74ae-4602-9957-7d6bf4b4d881]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.496 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.496 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.496 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 4c3c839c-22dd-4557-90e7-00c4261e25fa disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.496 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.497 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9596582f-afcf-4c3d-8518-6489ff33ee50]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.497 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.498 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fbda3de1-6c63-43dc-a4e5-9647ec228568]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.498 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-4c3c839c-22dd-4557-90e7-00c4261e25fa
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 4c3c839c-22dd-4557-90e7-00c4261e25fa
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:37:32 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:32.499 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'env', 'PROCESS_TAG=haproxy-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c3c839c-22dd-4557-90e7-00c4261e25fa.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:37:32 compute-0 nova_compute[117413]: 2025-10-08 16:37:32.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:32 compute-0 podman[150886]: 2025-10-08 16:37:32.908888777 +0000 UTC m=+0.059119906 container create a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007)
Oct 08 16:37:32 compute-0 systemd[1]: Started libpod-conmon-a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a.scope.
Oct 08 16:37:32 compute-0 podman[150886]: 2025-10-08 16:37:32.875106957 +0000 UTC m=+0.025338136 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:37:32 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:37:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0be59071dc9976c85057d71ebe2a2ea2a5d8190c4360b5f52a330eda7f51500f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:37:33 compute-0 podman[150886]: 2025-10-08 16:37:33.010632428 +0000 UTC m=+0.160863637 container init a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:37:33 compute-0 podman[150886]: 2025-10-08 16:37:33.025253942 +0000 UTC m=+0.175485101 container start a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 08 16:37:33 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [NOTICE]   (150905) : New worker (150907) forked
Oct 08 16:37:33 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [NOTICE]   (150905) : Loading success.
Oct 08 16:37:34 compute-0 ovn_controller[19768]: 2025-10-08T16:37:34Z|00205|binding|INFO|Claiming lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for this chassis.
Oct 08 16:37:34 compute-0 ovn_controller[19768]: 2025-10-08T16:37:34Z|00206|binding|INFO|dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2: Claiming fa:16:3e:50:c2:ac 10.100.0.14
Oct 08 16:37:34 compute-0 ovn_controller[19768]: 2025-10-08T16:37:34Z|00207|binding|INFO|Setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 up in Southbound
Oct 08 16:37:35 compute-0 nova_compute[117413]: 2025-10-08 16:37:35.651 2 INFO nova.compute.manager [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Post operation of migration started
Oct 08 16:37:35 compute-0 nova_compute[117413]: 2025-10-08 16:37:35.653 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.219 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.219 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.321 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.322 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.322 2 DEBUG nova.network.neutron [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:37:36 compute-0 podman[150929]: 2025-10-08 16:37:36.469045967 +0000 UTC m=+0.070910638 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:36 compute-0 nova_compute[117413]: 2025-10-08 16:37:36.832 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:37 compute-0 nova_compute[117413]: 2025-10-08 16:37:37.335 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:37 compute-0 nova_compute[117413]: 2025-10-08 16:37:37.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:37 compute-0 nova_compute[117413]: 2025-10-08 16:37:37.555 2 DEBUG nova.network.neutron [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Updating instance_info_cache with network_info: [{"id": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "address": "fa:16:3e:50:c2:ac", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbe9cf6e-8c", "ovs_interfaceid": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:37:38 compute-0 nova_compute[117413]: 2025-10-08 16:37:38.062 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:37:38 compute-0 nova_compute[117413]: 2025-10-08 16:37:38.586 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:38 compute-0 nova_compute[117413]: 2025-10-08 16:37:38.587 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:38 compute-0 nova_compute[117413]: 2025-10-08 16:37:38.587 2 DEBUG oslo_concurrency.lockutils [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:38 compute-0 nova_compute[117413]: 2025-10-08 16:37:38.593 2 INFO nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:37:38 compute-0 virtqemud[117740]: Domain id=18 name='instance-00000018' uuid=dd8c9e9c-275f-4be0-b687-0cf9f1d6f061 is tainted: custom-monitor
Oct 08 16:37:39 compute-0 nova_compute[117413]: 2025-10-08 16:37:39.602 2 INFO nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:37:40 compute-0 nova_compute[117413]: 2025-10-08 16:37:40.610 2 INFO nova.virt.libvirt.driver [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:37:40 compute-0 nova_compute[117413]: 2025-10-08 16:37:40.615 2 DEBUG nova.compute.manager [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:37:41 compute-0 nova_compute[117413]: 2025-10-08 16:37:41.126 2 DEBUG nova.objects.instance [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:37:41 compute-0 nova_compute[117413]: 2025-10-08 16:37:41.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:41.925 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:41.926 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:41.926 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:42 compute-0 nova_compute[117413]: 2025-10-08 16:37:42.143 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:42 compute-0 nova_compute[117413]: 2025-10-08 16:37:42.244 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:42 compute-0 nova_compute[117413]: 2025-10-08 16:37:42.245 2 WARNING neutronclient.v2_0.client [None req-7ccc4c1a-f4f6-4367-b10f-fb692993118a ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:42 compute-0 nova_compute[117413]: 2025-10-08 16:37:42.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:42 compute-0 podman[150951]: 2025-10-08 16:37:42.474915724 +0000 UTC m=+0.071646179 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid)
Oct 08 16:37:44 compute-0 podman[150972]: 2025-10-08 16:37:44.517160589 +0000 UTC m=+0.115490360 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Oct 08 16:37:46 compute-0 nova_compute[117413]: 2025-10-08 16:37:46.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:47 compute-0 nova_compute[117413]: 2025-10-08 16:37:47.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:49 compute-0 podman[150992]: 2025-10-08 16:37:49.454770388 +0000 UTC m=+0.056097768 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:37:49 compute-0 podman[150993]: 2025-10-08 16:37:49.500949297 +0000 UTC m=+0.096218941 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:37:51 compute-0 nova_compute[117413]: 2025-10-08 16:37:51.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:52 compute-0 nova_compute[117413]: 2025-10-08 16:37:52.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:53 compute-0 nova_compute[117413]: 2025-10-08 16:37:53.839 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:53 compute-0 nova_compute[117413]: 2025-10-08 16:37:53.840 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:53 compute-0 nova_compute[117413]: 2025-10-08 16:37:53.841 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:53 compute-0 nova_compute[117413]: 2025-10-08 16:37:53.841 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:53 compute-0 nova_compute[117413]: 2025-10-08 16:37:53.841 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:53 compute-0 nova_compute[117413]: 2025-10-08 16:37:53.855 2 INFO nova.compute.manager [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Terminating instance
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.370 2 DEBUG nova.compute.manager [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:37:54 compute-0 kernel: tapdbe9cf6e-8c (unregistering): left promiscuous mode
Oct 08 16:37:54 compute-0 NetworkManager[1034]: <info>  [1759941474.3974] device (tapdbe9cf6e-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00208|binding|INFO|Releasing lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 from this chassis (sb_readonly=0)
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00209|binding|INFO|Setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 down in Southbound
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00210|binding|INFO|Removing iface tapdbe9cf6e-8c ovn-installed in OVS
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.423 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c2:ac 10.100.0.14'], port_security=['fa:16:3e:50:c2:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9e9c-275f-4be0-b687-0cf9f1d6f061', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '15', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.425 28633 INFO neutron.agent.ovn.metadata.agent [-] Port dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.426 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.427 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d1073518-c289-4ea5-8063-d02ec1194490]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.427 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa namespace which is not needed anymore
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 08 16:37:54 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000018.scope: Consumed 2.706s CPU time.
Oct 08 16:37:54 compute-0 systemd-machined[77548]: Machine qemu-18-instance-00000018 terminated.
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.557 2 DEBUG nova.compute.manager [req-57695196-73ab-4683-813c-602ae924544d req-2286fd9a-4466-4489-85c6-6bf42e0c518f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.558 2 DEBUG oslo_concurrency.lockutils [req-57695196-73ab-4683-813c-602ae924544d req-2286fd9a-4466-4489-85c6-6bf42e0c518f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.558 2 DEBUG oslo_concurrency.lockutils [req-57695196-73ab-4683-813c-602ae924544d req-2286fd9a-4466-4489-85c6-6bf42e0c518f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.558 2 DEBUG oslo_concurrency.lockutils [req-57695196-73ab-4683-813c-602ae924544d req-2286fd9a-4466-4489-85c6-6bf42e0c518f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.559 2 DEBUG nova.compute.manager [req-57695196-73ab-4683-813c-602ae924544d req-2286fd9a-4466-4489-85c6-6bf42e0c518f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] No waiting events found dispatching network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.559 2 DEBUG nova.compute.manager [req-57695196-73ab-4683-813c-602ae924544d req-2286fd9a-4466-4489-85c6-6bf42e0c518f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:37:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [NOTICE]   (150905) : haproxy version is 3.0.5-8e879a5
Oct 08 16:37:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [NOTICE]   (150905) : path to executable is /usr/sbin/haproxy
Oct 08 16:37:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [WARNING]  (150905) : Exiting Master process...
Oct 08 16:37:54 compute-0 podman[151068]: 2025-10-08 16:37:54.570115807 +0000 UTC m=+0.036114368 container kill a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 08 16:37:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [ALERT]    (150905) : Current worker (150907) exited with code 143 (Terminated)
Oct 08 16:37:54 compute-0 neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa[150901]: [WARNING]  (150905) : All workers exited. Exiting... (0)
Oct 08 16:37:54 compute-0 systemd[1]: libpod-a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a.scope: Deactivated successfully.
Oct 08 16:37:54 compute-0 kernel: tapdbe9cf6e-8c: entered promiscuous mode
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 systemd-udevd[151046]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:37:54 compute-0 NetworkManager[1034]: <info>  [1759941474.6045] manager: (tapdbe9cf6e-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00211|binding|INFO|Claiming lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for this chassis.
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00212|binding|INFO|dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2: Claiming fa:16:3e:50:c2:ac 10.100.0.14
Oct 08 16:37:54 compute-0 kernel: tapdbe9cf6e-8c (unregistering): left promiscuous mode
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.619 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c2:ac 10.100.0.14'], port_security=['fa:16:3e:50:c2:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9e9c-275f-4be0-b687-0cf9f1d6f061', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '17', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:37:54 compute-0 podman[151083]: 2025-10-08 16:37:54.635775812 +0000 UTC m=+0.041423473 container died a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00213|binding|INFO|Setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 ovn-installed in OVS
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00214|binding|INFO|Setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 up in Southbound
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00215|binding|INFO|Releasing lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 from this chassis (sb_readonly=1)
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00216|if_status|INFO|Dropped 2 log messages in last 98 seconds (most recently, 98 seconds ago) due to excessive rate
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00217|if_status|INFO|Not setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 down as sb is readonly
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00218|binding|INFO|Releasing lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 from this chassis (sb_readonly=0)
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00219|binding|INFO|Removing iface tapdbe9cf6e-8c ovn-installed in OVS
Oct 08 16:37:54 compute-0 ovn_controller[19768]: 2025-10-08T16:37:54Z|00220|binding|INFO|Setting lport dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 down in Southbound
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.656 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c2:ac 10.100.0.14'], port_security=['fa:16:3e:50:c2:ac 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9e9c-275f-4be0-b687-0cf9f1d6f061', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '096bcdb2ee9d4587b808e167326cbd88', 'neutron:revision_number': '17', 'neutron:security_group_ids': '4e4677a0-1ad0-4540-9c8b-4dd14348f18d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3784f5f5-20ff-445d-bccb-63841b0639a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.669 2 INFO nova.virt.libvirt.driver [-] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Instance destroyed successfully.
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.670 2 DEBUG nova.objects.instance [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lazy-loading 'resources' on Instance uuid dd8c9e9c-275f-4be0-b687-0cf9f1d6f061 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a-userdata-shm.mount: Deactivated successfully.
Oct 08 16:37:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0be59071dc9976c85057d71ebe2a2ea2a5d8190c4360b5f52a330eda7f51500f-merged.mount: Deactivated successfully.
Oct 08 16:37:54 compute-0 podman[151083]: 2025-10-08 16:37:54.690327454 +0000 UTC m=+0.095975085 container cleanup a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:37:54 compute-0 systemd[1]: libpod-conmon-a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a.scope: Deactivated successfully.
Oct 08 16:37:54 compute-0 podman[151087]: 2025-10-08 16:37:54.718688197 +0000 UTC m=+0.109764725 container remove a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.734 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9e283a42-95f8-446c-9ed9-20bf3cc44f48]: (4, ("Wed Oct  8 04:37:54 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa (a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a)\na17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a\nWed Oct  8 04:37:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa (a17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a)\na17fc8808b9d72c72e83d19d67c700d7f53aa640a8de3c3335da7ca56c30ad8a\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.736 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5707e4-f965-419b-b8c7-6ef25f84dd32]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.736 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c3c839c-22dd-4557-90e7-00c4261e25fa.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.736 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[05b91e11-39aa-48b8-ad08-ce844022fc12]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.737 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3c839c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 kernel: tap4c3c839c-20: left promiscuous mode
Oct 08 16:37:54 compute-0 nova_compute[117413]: 2025-10-08 16:37:54.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.786 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[75bbc0e6-c85e-47ab-807c-56b0aab816f7]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.829 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[23cde332-0e21-47fb-bb80-6ea6f7a1abb8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.832 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[52fc84c6-862b-4e91-bb9e-e1dcd1fb4e15]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.850 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3eb9c3-5ebf-4284-b394-fac43d637dc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 274088, 'reachable_time': 19987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151131, 'error': None, 'target': 'ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.852 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c3c839c-22dd-4557-90e7-00c4261e25fa deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.852 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0b82a3-6e0f-44e9-9ed9-7d22a57d3757]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d4c3c839c\x2d22dd\x2d4557\x2d90e7\x2d00c4261e25fa.mount: Deactivated successfully.
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.853 28633 INFO neutron.agent.ovn.metadata.agent [-] Port dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.855 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.856 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0e638c41-70ac-4f47-8b5e-286b9f83f8f2]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.857 28633 INFO neutron.agent.ovn.metadata.agent [-] Port dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 in datapath 4c3c839c-22dd-4557-90e7-00c4261e25fa unbound from our chassis
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.858 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c3c839c-22dd-4557-90e7-00c4261e25fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:37:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:37:54.859 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[36aafcaf-51a8-4bc9-80d8-157e2485f4b9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.179 2 DEBUG nova.virt.libvirt.vif [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:36:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-1260763685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-1260763685',id=24,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:36:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=1151,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='096bcdb2ee9d4587b808e167326cbd88',ramdisk_id='',reservation_id='r-03bhfkgc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-425573729-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:37:41Z,user_data=None,user_id='4a25bc3a607548039426bbcfe6b35524',uuid=dd8c9e9c-275f-4be0-b687-0cf9f1d6f061,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "address": "fa:16:3e:50:c2:ac", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbe9cf6e-8c", "ovs_interfaceid": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.180 2 DEBUG nova.network.os_vif_util [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Converting VIF {"id": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "address": "fa:16:3e:50:c2:ac", "network": {"id": "4c3c839c-22dd-4557-90e7-00c4261e25fa", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-255323703-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8fa179725d8340b1be8823a581335be9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbe9cf6e-8c", "ovs_interfaceid": "dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.180 2 DEBUG nova.network.os_vif_util [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:c2:ac,bridge_name='br-int',has_traffic_filtering=True,id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbe9cf6e-8c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.181 2 DEBUG os_vif [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:c2:ac,bridge_name='br-int',has_traffic_filtering=True,id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbe9cf6e-8c') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbe9cf6e-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=425254c7-499a-4b55-bcfa-7c4b802b273d) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.190 2 INFO os_vif [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:c2:ac,bridge_name='br-int',has_traffic_filtering=True,id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2,network=Network(4c3c839c-22dd-4557-90e7-00c4261e25fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbe9cf6e-8c')
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.190 2 INFO nova.virt.libvirt.driver [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Deleting instance files /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061_del
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.191 2 INFO nova.virt.libvirt.driver [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Deletion of /var/lib/nova/instances/dd8c9e9c-275f-4be0-b687-0cf9f1d6f061_del complete
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.702 2 INFO nova.compute.manager [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Took 1.33 seconds to destroy the instance on the hypervisor.
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.702 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.703 2 DEBUG nova.compute.manager [-] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.703 2 DEBUG nova.network.neutron [-] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:37:55 compute-0 nova_compute[117413]: 2025-10-08 16:37:55.703 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.205 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.607 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.607 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.608 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.608 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.608 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] No waiting events found dispatching network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.608 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.609 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-plugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.609 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.609 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.609 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.610 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] No waiting events found dispatching network-vif-plugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.610 2 WARNING nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received unexpected event network-vif-plugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for instance with vm_state active and task_state deleting.
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.610 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-plugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.610 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.610 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.611 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.611 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] No waiting events found dispatching network-vif-plugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.611 2 WARNING nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received unexpected event network-vif-plugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for instance with vm_state active and task_state deleting.
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.611 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.611 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.611 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.612 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.612 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] No waiting events found dispatching network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.612 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.612 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.612 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.612 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.613 2 DEBUG oslo_concurrency.lockutils [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.613 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] No waiting events found dispatching network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.613 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-unplugged-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.613 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Received event network-vif-deleted-dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.613 2 INFO nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Neutron deleted interface dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2; detaching it from the instance and deleting it from the info cache
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.614 2 DEBUG nova.network.neutron [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:37:56 compute-0 nova_compute[117413]: 2025-10-08 16:37:56.963 2 DEBUG nova.network.neutron [-] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:37:57 compute-0 nova_compute[117413]: 2025-10-08 16:37:57.123 2 DEBUG nova.compute.manager [req-b35969a3-d836-4e05-ad9e-4c7ef4720540 req-d5c57213-1e0e-492a-a537-b4c189102d13 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Detach interface failed, port_id=dbe9cf6e-8cf3-4bb4-b732-e8562a39bbd2, reason: Instance dd8c9e9c-275f-4be0-b687-0cf9f1d6f061 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:37:57 compute-0 nova_compute[117413]: 2025-10-08 16:37:57.468 2 INFO nova.compute.manager [-] [instance: dd8c9e9c-275f-4be0-b687-0cf9f1d6f061] Took 1.77 seconds to deallocate network for instance.
Oct 08 16:37:57 compute-0 nova_compute[117413]: 2025-10-08 16:37:57.992 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:37:57 compute-0 nova_compute[117413]: 2025-10-08 16:37:57.992 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:37:57 compute-0 nova_compute[117413]: 2025-10-08 16:37:57.998 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:58 compute-0 nova_compute[117413]: 2025-10-08 16:37:58.038 2 INFO nova.scheduler.client.report [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Deleted allocations for instance dd8c9e9c-275f-4be0-b687-0cf9f1d6f061
Oct 08 16:37:59 compute-0 nova_compute[117413]: 2025-10-08 16:37:59.073 2 DEBUG oslo_concurrency.lockutils [None req-33b3955e-d9f7-4d48-96d2-88e36b2d293b 4a25bc3a607548039426bbcfe6b35524 096bcdb2ee9d4587b808e167326cbd88 - - default default] Lock "dd8c9e9c-275f-4be0-b687-0cf9f1d6f061" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.233s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:37:59 compute-0 podman[127881]: time="2025-10-08T16:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:37:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:37:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3033 "" "Go-http-client/1.1"
Oct 08 16:38:00 compute-0 nova_compute[117413]: 2025-10-08 16:38:00.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: ERROR   16:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: ERROR   16:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: ERROR   16:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: ERROR   16:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: ERROR   16:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:38:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:38:01 compute-0 podman[151132]: 2025-10-08 16:38:01.469028716 +0000 UTC m=+0.074016298 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, container_name=multipathd)
Oct 08 16:38:01 compute-0 nova_compute[117413]: 2025-10-08 16:38:01.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:02.297 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:38:02 compute-0 nova_compute[117413]: 2025-10-08 16:38:02.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:02 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:02.298 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:38:02 compute-0 nova_compute[117413]: 2025-10-08 16:38:02.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:05 compute-0 nova_compute[117413]: 2025-10-08 16:38:05.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:06 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:06.300 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:38:06 compute-0 nova_compute[117413]: 2025-10-08 16:38:06.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:07 compute-0 podman[151153]: 2025-10-08 16:38:07.498485969 +0000 UTC m=+0.098341934 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:38:10 compute-0 nova_compute[117413]: 2025-10-08 16:38:10.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:11 compute-0 nova_compute[117413]: 2025-10-08 16:38:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:11 compute-0 nova_compute[117413]: 2025-10-08 16:38:11.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:11 compute-0 nova_compute[117413]: 2025-10-08 16:38:11.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:38:11 compute-0 nova_compute[117413]: 2025-10-08 16:38:11.882 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:38:11 compute-0 nova_compute[117413]: 2025-10-08 16:38:11.882 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:38:11 compute-0 nova_compute[117413]: 2025-10-08 16:38:11.882 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:38:12 compute-0 nova_compute[117413]: 2025-10-08 16:38:12.071 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:38:12 compute-0 nova_compute[117413]: 2025-10-08 16:38:12.072 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:38:12 compute-0 nova_compute[117413]: 2025-10-08 16:38:12.106 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:38:12 compute-0 nova_compute[117413]: 2025-10-08 16:38:12.107 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6159MB free_disk=73.25061798095703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:38:12 compute-0 nova_compute[117413]: 2025-10-08 16:38:12.107 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:38:12 compute-0 nova_compute[117413]: 2025-10-08 16:38:12.107 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:38:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:12.258 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:54:5d 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a424ccdfcc0c4fc2ac4c67ed7d4c2afe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8a2f4b59-05b7-414e-b353-f44ee56d820a) old=Port_Binding(mac=['fa:16:3e:83:54:5d'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a424ccdfcc0c4fc2ac4c67ed7d4c2afe', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:38:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:12.259 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8a2f4b59-05b7-414e-b353-f44ee56d820a in datapath 2742327d-2338-460e-952e-6446bba2b03f updated
Oct 08 16:38:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:12.260 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2742327d-2338-460e-952e-6446bba2b03f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:38:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:12.261 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2bfba1-04d5-469e-a985-8c339cbcfb5a]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:38:13 compute-0 nova_compute[117413]: 2025-10-08 16:38:13.166 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:38:13 compute-0 nova_compute[117413]: 2025-10-08 16:38:13.167 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:38:12 up 46 min,  0 user,  load average: 0.05, 0.08, 0.17\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:38:13 compute-0 nova_compute[117413]: 2025-10-08 16:38:13.188 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:38:13 compute-0 podman[151177]: 2025-10-08 16:38:13.449708762 +0000 UTC m=+0.061183766 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 08 16:38:13 compute-0 nova_compute[117413]: 2025-10-08 16:38:13.696 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:38:14 compute-0 nova_compute[117413]: 2025-10-08 16:38:14.210 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:38:14 compute-0 nova_compute[117413]: 2025-10-08 16:38:14.211 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.103s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:38:15 compute-0 nova_compute[117413]: 2025-10-08 16:38:15.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:15 compute-0 nova_compute[117413]: 2025-10-08 16:38:15.206 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:15 compute-0 nova_compute[117413]: 2025-10-08 16:38:15.206 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:15 compute-0 nova_compute[117413]: 2025-10-08 16:38:15.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:15 compute-0 nova_compute[117413]: 2025-10-08 16:38:15.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:15 compute-0 nova_compute[117413]: 2025-10-08 16:38:15.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:38:15 compute-0 podman[151198]: 2025-10-08 16:38:15.438829415 +0000 UTC m=+0.047216651 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 08 16:38:16 compute-0 nova_compute[117413]: 2025-10-08 16:38:16.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:16 compute-0 nova_compute[117413]: 2025-10-08 16:38:16.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:18 compute-0 nova_compute[117413]: 2025-10-08 16:38:18.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:18.481 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:31:b8 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9e2d840e-4d2d-4054-acee-73a609d3b422', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e2d840e-4d2d-4054-acee-73a609d3b422', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7f9833a3-8d3f-4c41-a95b-7634ee9ae296, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=90544c34-a8fa-41b7-8d34-0ce7703e6f57) old=Port_Binding(mac=['fa:16:3e:c1:31:b8'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-9e2d840e-4d2d-4054-acee-73a609d3b422', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e2d840e-4d2d-4054-acee-73a609d3b422', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:38:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:18.482 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 90544c34-a8fa-41b7-8d34-0ce7703e6f57 in datapath 9e2d840e-4d2d-4054-acee-73a609d3b422 updated
Oct 08 16:38:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:18.483 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9e2d840e-4d2d-4054-acee-73a609d3b422, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:38:18 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:18.484 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9778c01f-b824-46e5-813f-0a8280e145b4]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:38:20 compute-0 nova_compute[117413]: 2025-10-08 16:38:20.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:20 compute-0 nova_compute[117413]: 2025-10-08 16:38:20.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:20 compute-0 podman[151217]: 2025-10-08 16:38:20.445773139 +0000 UTC m=+0.053481792 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:38:20 compute-0 podman[151218]: 2025-10-08 16:38:20.492609198 +0000 UTC m=+0.097558981 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:38:21 compute-0 nova_compute[117413]: 2025-10-08 16:38:21.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:25 compute-0 nova_compute[117413]: 2025-10-08 16:38:25.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:26 compute-0 nova_compute[117413]: 2025-10-08 16:38:26.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:29 compute-0 nova_compute[117413]: 2025-10-08 16:38:29.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:38:29 compute-0 podman[127881]: time="2025-10-08T16:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:38:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:38:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3030 "" "Go-http-client/1.1"
Oct 08 16:38:30 compute-0 nova_compute[117413]: 2025-10-08 16:38:30.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: ERROR   16:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: ERROR   16:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: ERROR   16:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: ERROR   16:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: ERROR   16:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:38:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:38:31 compute-0 nova_compute[117413]: 2025-10-08 16:38:31.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:32 compute-0 unix_chkpwd[151269]: password check failed for user (root)
Oct 08 16:38:32 compute-0 sshd-session[151267]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 08 16:38:32 compute-0 podman[151270]: 2025-10-08 16:38:32.476616658 +0000 UTC m=+0.079243209 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:38:33 compute-0 sshd-session[151267]: Failed password for root from 91.224.92.108 port 52956 ssh2
Oct 08 16:38:34 compute-0 unix_chkpwd[151290]: password check failed for user (root)
Oct 08 16:38:35 compute-0 nova_compute[117413]: 2025-10-08 16:38:35.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:35 compute-0 sshd-session[151267]: Failed password for root from 91.224.92.108 port 52956 ssh2
Oct 08 16:38:36 compute-0 unix_chkpwd[151291]: password check failed for user (root)
Oct 08 16:38:36 compute-0 nova_compute[117413]: 2025-10-08 16:38:36.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:36 compute-0 ovn_controller[19768]: 2025-10-08T16:38:36Z|00221|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 08 16:38:38 compute-0 sshd-session[151267]: Failed password for root from 91.224.92.108 port 52956 ssh2
Oct 08 16:38:38 compute-0 podman[151292]: 2025-10-08 16:38:38.464747853 +0000 UTC m=+0.073049900 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc.)
Oct 08 16:38:40 compute-0 sshd-session[151267]: Received disconnect from 91.224.92.108 port 52956:11:  [preauth]
Oct 08 16:38:40 compute-0 sshd-session[151267]: Disconnected from authenticating user root 91.224.92.108 port 52956 [preauth]
Oct 08 16:38:40 compute-0 sshd-session[151267]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 08 16:38:40 compute-0 nova_compute[117413]: 2025-10-08 16:38:40.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:40.451 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:38:40 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:40.452 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:38:40 compute-0 nova_compute[117413]: 2025-10-08 16:38:40.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:40 compute-0 unix_chkpwd[151317]: password check failed for user (root)
Oct 08 16:38:40 compute-0 sshd-session[151314]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 08 16:38:41 compute-0 nova_compute[117413]: 2025-10-08 16:38:41.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:41.928 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:38:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:41.929 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:38:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:41.929 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:38:43 compute-0 sshd-session[151314]: Failed password for root from 91.224.92.108 port 32254 ssh2
Oct 08 16:38:44 compute-0 podman[151319]: 2025-10-08 16:38:44.468822338 +0000 UTC m=+0.075043238 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:38:44 compute-0 unix_chkpwd[151339]: password check failed for user (root)
Oct 08 16:38:45 compute-0 nova_compute[117413]: 2025-10-08 16:38:45.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:46 compute-0 podman[151340]: 2025-10-08 16:38:46.460148526 +0000 UTC m=+0.058681653 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 08 16:38:46 compute-0 sshd-session[151314]: Failed password for root from 91.224.92.108 port 32254 ssh2
Oct 08 16:38:46 compute-0 nova_compute[117413]: 2025-10-08 16:38:46.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:48 compute-0 unix_chkpwd[151360]: password check failed for user (root)
Oct 08 16:38:50 compute-0 nova_compute[117413]: 2025-10-08 16:38:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:50 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:38:50.454 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:38:51 compute-0 sshd-session[151314]: Failed password for root from 91.224.92.108 port 32254 ssh2
Oct 08 16:38:51 compute-0 podman[151361]: 2025-10-08 16:38:51.471904459 +0000 UTC m=+0.070712132 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:38:51 compute-0 podman[151362]: 2025-10-08 16:38:51.549983194 +0000 UTC m=+0.140737763 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:38:51 compute-0 nova_compute[117413]: 2025-10-08 16:38:51.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:52 compute-0 sshd-session[151314]: Received disconnect from 91.224.92.108 port 32254:11:  [preauth]
Oct 08 16:38:52 compute-0 sshd-session[151314]: Disconnected from authenticating user root 91.224.92.108 port 32254 [preauth]
Oct 08 16:38:52 compute-0 sshd-session[151314]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 08 16:38:53 compute-0 unix_chkpwd[151413]: password check failed for user (root)
Oct 08 16:38:53 compute-0 sshd-session[151411]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 08 16:38:55 compute-0 nova_compute[117413]: 2025-10-08 16:38:55.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:55 compute-0 sshd-session[151411]: Failed password for root from 91.224.92.108 port 56390 ssh2
Oct 08 16:38:56 compute-0 nova_compute[117413]: 2025-10-08 16:38:56.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:38:57 compute-0 unix_chkpwd[151414]: password check failed for user (root)
Oct 08 16:38:58 compute-0 sshd-session[151411]: Failed password for root from 91.224.92.108 port 56390 ssh2
Oct 08 16:38:59 compute-0 unix_chkpwd[151415]: password check failed for user (root)
Oct 08 16:38:59 compute-0 podman[127881]: time="2025-10-08T16:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:38:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:38:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 08 16:39:00 compute-0 nova_compute[117413]: 2025-10-08 16:39:00.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:01 compute-0 sshd-session[151411]: Failed password for root from 91.224.92.108 port 56390 ssh2
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: ERROR   16:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: ERROR   16:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: ERROR   16:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: ERROR   16:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: ERROR   16:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:39:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:39:01 compute-0 nova_compute[117413]: 2025-10-08 16:39:01.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:02 compute-0 sshd-session[151411]: Received disconnect from 91.224.92.108 port 56390:11:  [preauth]
Oct 08 16:39:02 compute-0 sshd-session[151411]: Disconnected from authenticating user root 91.224.92.108 port 56390 [preauth]
Oct 08 16:39:02 compute-0 sshd-session[151411]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=91.224.92.108  user=root
Oct 08 16:39:03 compute-0 podman[151417]: 2025-10-08 16:39:03.509195826 +0000 UTC m=+0.105020987 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:39:05 compute-0 nova_compute[117413]: 2025-10-08 16:39:05.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:06 compute-0 nova_compute[117413]: 2025-10-08 16:39:06.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:09 compute-0 podman[151437]: 2025-10-08 16:39:09.447176375 +0000 UTC m=+0.060209537 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, version=9.6)
Oct 08 16:39:10 compute-0 nova_compute[117413]: 2025-10-08 16:39:10.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:11 compute-0 nova_compute[117413]: 2025-10-08 16:39:11.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:11 compute-0 nova_compute[117413]: 2025-10-08 16:39:11.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:11 compute-0 nova_compute[117413]: 2025-10-08 16:39:11.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:39:11 compute-0 nova_compute[117413]: 2025-10-08 16:39:11.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:39:11 compute-0 nova_compute[117413]: 2025-10-08 16:39:11.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:39:11 compute-0 nova_compute[117413]: 2025-10-08 16:39:11.882 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:39:12 compute-0 nova_compute[117413]: 2025-10-08 16:39:12.054 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:39:12 compute-0 nova_compute[117413]: 2025-10-08 16:39:12.055 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:12 compute-0 nova_compute[117413]: 2025-10-08 16:39:12.083 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.028s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:12 compute-0 nova_compute[117413]: 2025-10-08 16:39:12.084 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6161MB free_disk=73.25053405761719GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:39:12 compute-0 nova_compute[117413]: 2025-10-08 16:39:12.084 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:39:12 compute-0 nova_compute[117413]: 2025-10-08 16:39:12.084 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:39:13 compute-0 nova_compute[117413]: 2025-10-08 16:39:13.140 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:39:13 compute-0 nova_compute[117413]: 2025-10-08 16:39:13.141 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:39:12 up 47 min,  0 user,  load average: 0.02, 0.07, 0.15\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:39:13 compute-0 nova_compute[117413]: 2025-10-08 16:39:13.168 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:39:13 compute-0 nova_compute[117413]: 2025-10-08 16:39:13.678 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:39:14 compute-0 nova_compute[117413]: 2025-10-08 16:39:14.190 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:39:14 compute-0 nova_compute[117413]: 2025-10-08 16:39:14.191 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.107s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:39:15 compute-0 nova_compute[117413]: 2025-10-08 16:39:15.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:15 compute-0 podman[151459]: 2025-10-08 16:39:15.464047392 +0000 UTC m=+0.063957206 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:39:16 compute-0 nova_compute[117413]: 2025-10-08 16:39:16.191 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:16 compute-0 nova_compute[117413]: 2025-10-08 16:39:16.192 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:16 compute-0 nova_compute[117413]: 2025-10-08 16:39:16.192 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:16 compute-0 nova_compute[117413]: 2025-10-08 16:39:16.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:16 compute-0 nova_compute[117413]: 2025-10-08 16:39:16.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:39:16 compute-0 nova_compute[117413]: 2025-10-08 16:39:16.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:17 compute-0 podman[151479]: 2025-10-08 16:39:17.448730259 +0000 UTC m=+0.057754237 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:39:18 compute-0 nova_compute[117413]: 2025-10-08 16:39:18.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:20 compute-0 nova_compute[117413]: 2025-10-08 16:39:20.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:20 compute-0 nova_compute[117413]: 2025-10-08 16:39:20.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:21 compute-0 nova_compute[117413]: 2025-10-08 16:39:21.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:22 compute-0 nova_compute[117413]: 2025-10-08 16:39:22.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:22 compute-0 podman[151500]: 2025-10-08 16:39:22.444970302 +0000 UTC m=+0.055915893 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:39:22 compute-0 podman[151501]: 2025-10-08 16:39:22.48974799 +0000 UTC m=+0.094103450 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:39:25 compute-0 nova_compute[117413]: 2025-10-08 16:39:25.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:26 compute-0 nova_compute[117413]: 2025-10-08 16:39:26.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:28 compute-0 nova_compute[117413]: 2025-10-08 16:39:28.691 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Creating tmpfile /var/lib/nova/instances/tmpwxb5j9zg to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:39:28 compute-0 nova_compute[117413]: 2025-10-08 16:39:28.692 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:28 compute-0 nova_compute[117413]: 2025-10-08 16:39:28.696 2 DEBUG nova.compute.manager [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwxb5j9zg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:39:28 compute-0 nova_compute[117413]: 2025-10-08 16:39:28.726 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Creating tmpfile /var/lib/nova/instances/tmp6lsr992a to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:39:28 compute-0 nova_compute[117413]: 2025-10-08 16:39:28.727 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:28 compute-0 nova_compute[117413]: 2025-10-08 16:39:28.731 2 DEBUG nova.compute.manager [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6lsr992a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:39:29 compute-0 podman[127881]: time="2025-10-08T16:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:39:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:39:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3029 "" "Go-http-client/1.1"
Oct 08 16:39:30 compute-0 nova_compute[117413]: 2025-10-08 16:39:30.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:30 compute-0 nova_compute[117413]: 2025-10-08 16:39:30.743 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:30 compute-0 nova_compute[117413]: 2025-10-08 16:39:30.762 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: ERROR   16:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: ERROR   16:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: ERROR   16:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: ERROR   16:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: ERROR   16:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:39:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:39:31 compute-0 nova_compute[117413]: 2025-10-08 16:39:31.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:34 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 16:39:34 compute-0 podman[151550]: 2025-10-08 16:39:34.407575432 +0000 UTC m=+0.066855260 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:39:34 compute-0 nova_compute[117413]: 2025-10-08 16:39:34.610 2 DEBUG nova.compute.manager [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwxb5j9zg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:39:35 compute-0 nova_compute[117413]: 2025-10-08 16:39:35.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:35 compute-0 nova_compute[117413]: 2025-10-08 16:39:35.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:35 compute-0 nova_compute[117413]: 2025-10-08 16:39:35.626 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:39:35 compute-0 nova_compute[117413]: 2025-10-08 16:39:35.627 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:39:35 compute-0 nova_compute[117413]: 2025-10-08 16:39:35.628 2 DEBUG nova.network.neutron [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:39:36 compute-0 nova_compute[117413]: 2025-10-08 16:39:36.136 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:36 compute-0 nova_compute[117413]: 2025-10-08 16:39:36.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:36 compute-0 nova_compute[117413]: 2025-10-08 16:39:36.890 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.050 2 DEBUG nova.network.neutron [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Updating instance_info_cache with network_info: [{"id": "037479b5-3904-4ab8-b4c7-8ce540406327", "address": "fa:16:3e:ec:97:a5", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037479b5-39", "ovs_interfaceid": "037479b5-3904-4ab8-b4c7-8ce540406327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.557 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.572 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwxb5j9zg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.573 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Creating instance directory: /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.573 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Creating disk.info with the contents: {'/var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk': 'qcow2', '/var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.573 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.574 2 DEBUG nova.objects.instance [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.868 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:37 compute-0 nova_compute[117413]: 2025-10-08 16:39:37.869 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.082 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.085 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.087 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.149 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.150 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.151 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.151 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.154 2 DEBUG oslo_utils.imageutils.format_inspector [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.154 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.209 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.210 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.300 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk 1073741824" returned: 0 in 0.090s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.301 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.302 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.368 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.369 2 DEBUG nova.virt.disk.api [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.369 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.377 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.426 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.427 2 DEBUG nova.virt.disk.api [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.427 2 DEBUG nova.objects.instance [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.936 2 DEBUG nova.objects.base [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.937 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.980 2 DEBUG oslo_concurrency.processutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk.config 497664" returned: 0 in 0.043s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.981 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.983 2 DEBUG nova.virt.libvirt.vif [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:38:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-739822020',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-7398220',id=27,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:39:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-20y21oyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:39:00Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "037479b5-3904-4ab8-b4c7-8ce540406327", "address": "fa:16:3e:ec:97:a5", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap037479b5-39", "ovs_interfaceid": "037479b5-3904-4ab8-b4c7-8ce540406327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.983 2 DEBUG nova.network.os_vif_util [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "037479b5-3904-4ab8-b4c7-8ce540406327", "address": "fa:16:3e:ec:97:a5", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap037479b5-39", "ovs_interfaceid": "037479b5-3904-4ab8-b4c7-8ce540406327", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.984 2 DEBUG nova.network.os_vif_util [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:97:a5,bridge_name='br-int',has_traffic_filtering=True,id=037479b5-3904-4ab8-b4c7-8ce540406327,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037479b5-39') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.985 2 DEBUG os_vif [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:97:a5,bridge_name='br-int',has_traffic_filtering=True,id=037479b5-3904-4ab8-b4c7-8ce540406327,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037479b5-39') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.987 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '19a4d520-9eae-526c-a934-46c81c6aad48', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap037479b5-39, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap037479b5-39, col_values=(('qos', UUID('fb351db6-b568-4f0f-9bb7-460980e79604')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.993 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap037479b5-39, col_values=(('external_ids', {'iface-id': '037479b5-3904-4ab8-b4c7-8ce540406327', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:97:a5', 'vm-uuid': 'bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:38 compute-0 NetworkManager[1034]: <info>  [1759941578.9948] manager: (tap037479b5-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 08 16:39:38 compute-0 nova_compute[117413]: 2025-10-08 16:39:38.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.001 2 INFO os_vif [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:97:a5,bridge_name='br-int',has_traffic_filtering=True,id=037479b5-3904-4ab8-b4c7-8ce540406327,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037479b5-39')
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.001 2 DEBUG nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.002 2 DEBUG nova.compute.manager [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwxb5j9zg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.002 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.083 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:39 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:39.333 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:39:39 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:39.334 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.786 2 DEBUG nova.network.neutron [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Port 037479b5-3904-4ab8-b4c7-8ce540406327 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:39:39 compute-0 nova_compute[117413]: 2025-10-08 16:39:39.797 2 DEBUG nova.compute.manager [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwxb5j9zg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:39:40 compute-0 podman[151591]: 2025-10-08 16:39:40.447528247 +0000 UTC m=+0.059136296 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm)
Oct 08 16:39:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:41.336 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:41 compute-0 nova_compute[117413]: 2025-10-08 16:39:41.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:41.930 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:39:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:41.930 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:39:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:41.930 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:39:42 compute-0 systemd[1]: Starting libvirt proxy daemon...
Oct 08 16:39:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Oct 08 16:39:43 compute-0 kernel: tap037479b5-39: entered promiscuous mode
Oct 08 16:39:43 compute-0 NetworkManager[1034]: <info>  [1759941583.1919] manager: (tap037479b5-39): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_controller[19768]: 2025-10-08T16:39:43Z|00222|binding|INFO|Claiming lport 037479b5-3904-4ab8-b4c7-8ce540406327 for this additional chassis.
Oct 08 16:39:43 compute-0 ovn_controller[19768]: 2025-10-08T16:39:43Z|00223|binding|INFO|037479b5-3904-4ab8-b4c7-8ce540406327: Claiming fa:16:3e:ec:97:a5 10.100.0.13
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.213 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:97:a5 10.100.0.13'], port_security=['fa:16:3e:ec:97:a5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '10', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=037479b5-3904-4ab8-b4c7-8ce540406327) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.214 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 037479b5-3904-4ab8-b4c7-8ce540406327 in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.215 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.234 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe93408-7876-4f59-a312-6304c3e5a1ca]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.235 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2742327d-21 in ovnmeta-2742327d-2338-460e-952e-6446bba2b03f namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.237 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2742327d-20 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.237 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b53571-6fab-4359-bdb3-e96169a71893]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.238 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[501d36b8-b1d6-47ab-af34-528c77769ff8]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 systemd-machined[77548]: New machine qemu-19-instance-0000001b.
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.257 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[7298ed93-de7e-4925-87b6-df2b037e41c4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-0000001b.
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_controller[19768]: 2025-10-08T16:39:43Z|00224|binding|INFO|Setting lport 037479b5-3904-4ab8-b4c7-8ce540406327 ovn-installed in OVS
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.285 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[67fc9b34-06ea-4c75-ad6b-dcf4ed8e0373]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 systemd-udevd[151648]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:39:43 compute-0 NetworkManager[1034]: <info>  [1759941583.3138] device (tap037479b5-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:39:43 compute-0 NetworkManager[1034]: <info>  [1759941583.3154] device (tap037479b5-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.335 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a6199215-6449-47d2-8cbf-f16caa1f8b6b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.340 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a394d2-005f-45c6-b976-fac9d90f21dd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 NetworkManager[1034]: <info>  [1759941583.3422] manager: (tap2742327d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.379 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[92d45fd8-87f9-4fb0-b4d0-c290ebea8af0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.382 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7c87a6-94b2-4768-b96d-fd8918698f5e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 NetworkManager[1034]: <info>  [1759941583.4182] device (tap2742327d-20): carrier: link connected
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.423 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[f721cd43-98b0-4fb4-97dc-077b32e5933b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.441 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf96faf-0868-4c20-a96e-b4bf9942f099]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 287210, 'reachable_time': 39805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151678, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.467 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d928b1-6d2f-4dfc-afba-93f7d153bc56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:545d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 287210, 'tstamp': 287210}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151679, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.492 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0c6934-6f9e-40c2-a5ee-662d51051439]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 287210, 'reachable_time': 39805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 151680, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.536 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[761d4aef-2dda-47c7-a1a7-bcecbca5a390]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.621 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[73368812-596b-400a-b64d-3cb5e768c826]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.623 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.623 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.624 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2742327d-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 NetworkManager[1034]: <info>  [1759941583.6268] manager: (tap2742327d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 08 16:39:43 compute-0 kernel: tap2742327d-20: entered promiscuous mode
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.630 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2742327d-20, col_values=(('external_ids', {'iface-id': '8a2f4b59-05b7-414e-b353-f44ee56d820a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_controller[19768]: 2025-10-08T16:39:43Z|00225|binding|INFO|Releasing lport 8a2f4b59-05b7-414e-b353-f44ee56d820a from this chassis (sb_readonly=0)
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.635 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ca3190-c1ea-418c-850f-0294d834d57a]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.636 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.636 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.636 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 2742327d-2338-460e-952e-6446bba2b03f disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.636 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.636 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f2be8f1c-ed0e-4790-9356-4c36fde76031]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.637 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.637 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b9026108-8eae-44f4-9dc0-de61173f294c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.637 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:39:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:39:43.638 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'env', 'PROCESS_TAG=haproxy-2742327d-2338-460e-952e-6446bba2b03f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2742327d-2338-460e-952e-6446bba2b03f.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:43 compute-0 nova_compute[117413]: 2025-10-08 16:39:43.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:44 compute-0 podman[151717]: 2025-10-08 16:39:44.009609733 +0000 UTC m=+0.026501640 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:39:44 compute-0 podman[151717]: 2025-10-08 16:39:44.177956356 +0000 UTC m=+0.194848243 container create 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 08 16:39:44 compute-0 systemd[1]: Started libpod-conmon-1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da.scope.
Oct 08 16:39:44 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7d51457f4d5d37f7805b6595cdc882595939fa00e58ca76381279713de8ba9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:39:44 compute-0 podman[151717]: 2025-10-08 16:39:44.285264738 +0000 UTC m=+0.302156645 container init 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, tcib_managed=true)
Oct 08 16:39:44 compute-0 podman[151717]: 2025-10-08 16:39:44.298822381 +0000 UTC m=+0.315714268 container start 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:39:44 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [NOTICE]   (151752) : New worker (151754) forked
Oct 08 16:39:44 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [NOTICE]   (151752) : Loading success.
Oct 08 16:39:45 compute-0 ovn_controller[19768]: 2025-10-08T16:39:45Z|00226|binding|INFO|Claiming lport 037479b5-3904-4ab8-b4c7-8ce540406327 for this chassis.
Oct 08 16:39:45 compute-0 ovn_controller[19768]: 2025-10-08T16:39:45Z|00227|binding|INFO|037479b5-3904-4ab8-b4c7-8ce540406327: Claiming fa:16:3e:ec:97:a5 10.100.0.13
Oct 08 16:39:45 compute-0 ovn_controller[19768]: 2025-10-08T16:39:45Z|00228|binding|INFO|Setting lport 037479b5-3904-4ab8-b4c7-8ce540406327 up in Southbound
Oct 08 16:39:46 compute-0 podman[151763]: 2025-10-08 16:39:46.454107983 +0000 UTC m=+0.056809279 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid)
Oct 08 16:39:46 compute-0 nova_compute[117413]: 2025-10-08 16:39:46.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:46 compute-0 nova_compute[117413]: 2025-10-08 16:39:46.745 2 INFO nova.compute.manager [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Post operation of migration started
Oct 08 16:39:46 compute-0 nova_compute[117413]: 2025-10-08 16:39:46.745 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:47 compute-0 nova_compute[117413]: 2025-10-08 16:39:47.054 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:47 compute-0 nova_compute[117413]: 2025-10-08 16:39:47.054 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:47 compute-0 nova_compute[117413]: 2025-10-08 16:39:47.141 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:39:47 compute-0 nova_compute[117413]: 2025-10-08 16:39:47.141 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:39:47 compute-0 nova_compute[117413]: 2025-10-08 16:39:47.142 2 DEBUG nova.network.neutron [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:39:47 compute-0 nova_compute[117413]: 2025-10-08 16:39:47.654 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:48 compute-0 podman[151784]: 2025-10-08 16:39:48.464229986 +0000 UTC m=+0.072883245 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251007, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 08 16:39:48 compute-0 nova_compute[117413]: 2025-10-08 16:39:48.695 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:48 compute-0 nova_compute[117413]: 2025-10-08 16:39:48.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:49 compute-0 nova_compute[117413]: 2025-10-08 16:39:49.261 2 DEBUG nova.network.neutron [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Updating instance_info_cache with network_info: [{"id": "037479b5-3904-4ab8-b4c7-8ce540406327", "address": "fa:16:3e:ec:97:a5", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037479b5-39", "ovs_interfaceid": "037479b5-3904-4ab8-b4c7-8ce540406327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:39:49 compute-0 nova_compute[117413]: 2025-10-08 16:39:49.769 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:39:50 compute-0 nova_compute[117413]: 2025-10-08 16:39:50.292 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:39:50 compute-0 nova_compute[117413]: 2025-10-08 16:39:50.293 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:39:50 compute-0 nova_compute[117413]: 2025-10-08 16:39:50.293 2 DEBUG oslo_concurrency.lockutils [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:39:50 compute-0 nova_compute[117413]: 2025-10-08 16:39:50.298 2 INFO nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:39:50 compute-0 virtqemud[117740]: Domain id=19 name='instance-0000001b' uuid=bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21 is tainted: custom-monitor
Oct 08 16:39:51 compute-0 nova_compute[117413]: 2025-10-08 16:39:51.307 2 INFO nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:39:51 compute-0 nova_compute[117413]: 2025-10-08 16:39:51.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:52 compute-0 nova_compute[117413]: 2025-10-08 16:39:52.315 2 INFO nova.virt.libvirt.driver [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:39:52 compute-0 nova_compute[117413]: 2025-10-08 16:39:52.321 2 DEBUG nova.compute.manager [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:39:52 compute-0 nova_compute[117413]: 2025-10-08 16:39:52.832 2 DEBUG nova.objects.instance [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:39:53 compute-0 podman[151806]: 2025-10-08 16:39:53.455946499 +0000 UTC m=+0.057284222 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:39:53 compute-0 podman[151807]: 2025-10-08 16:39:53.548992828 +0000 UTC m=+0.134470211 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller)
Oct 08 16:39:53 compute-0 nova_compute[117413]: 2025-10-08 16:39:53.855 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:53 compute-0 nova_compute[117413]: 2025-10-08 16:39:53.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:54 compute-0 nova_compute[117413]: 2025-10-08 16:39:54.061 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:54 compute-0 nova_compute[117413]: 2025-10-08 16:39:54.062 2 WARNING neutronclient.v2_0.client [None req-68b74b0e-30e2-45dc-8e88-a5fb237c7aab ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:39:56 compute-0 nova_compute[117413]: 2025-10-08 16:39:56.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:59 compute-0 nova_compute[117413]: 2025-10-08 16:39:59.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:39:59 compute-0 podman[127881]: time="2025-10-08T16:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:39:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:39:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3492 "" "Go-http-client/1.1"
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: ERROR   16:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: ERROR   16:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: ERROR   16:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: ERROR   16:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: ERROR   16:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:40:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:40:01 compute-0 nova_compute[117413]: 2025-10-08 16:40:01.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:01 compute-0 nova_compute[117413]: 2025-10-08 16:40:01.983 2 DEBUG nova.compute.manager [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6lsr992a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='33416544-25f6-4028-a159-162014dbffea',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:40:02 compute-0 nova_compute[117413]: 2025-10-08 16:40:02.998 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-33416544-25f6-4028-a159-162014dbffea" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:40:02 compute-0 nova_compute[117413]: 2025-10-08 16:40:02.999 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-33416544-25f6-4028-a159-162014dbffea" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:40:02 compute-0 nova_compute[117413]: 2025-10-08 16:40:02.999 2 DEBUG nova.network.neutron [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:40:03 compute-0 nova_compute[117413]: 2025-10-08 16:40:03.506 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:04 compute-0 nova_compute[117413]: 2025-10-08 16:40:04.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:04 compute-0 nova_compute[117413]: 2025-10-08 16:40:04.321 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:04 compute-0 nova_compute[117413]: 2025-10-08 16:40:04.505 2 DEBUG nova.network.neutron [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Updating instance_info_cache with network_info: [{"id": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "address": "fa:16:3e:b3:24:60", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2fe6b5b-61", "ovs_interfaceid": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.014 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-33416544-25f6-4028-a159-162014dbffea" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.030 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6lsr992a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='33416544-25f6-4028-a159-162014dbffea',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.030 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Creating instance directory: /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.031 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Creating disk.info with the contents: {'/var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk': 'qcow2', '/var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.031 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.032 2 DEBUG nova.objects.instance [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 33416544-25f6-4028-a159-162014dbffea obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:40:05 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 08 16:40:05 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 08 16:40:05 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 08 16:40:05 compute-0 podman[151854]: 2025-10-08 16:40:05.463921195 +0000 UTC m=+0.058421746 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.539 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.542 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.544 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.618 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.619 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.619 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.621 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.624 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.624 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.676 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.677 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.717 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.718 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.099s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.719 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.780 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.781 2 DEBUG nova.virt.disk.api [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.781 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.852 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.854 2 DEBUG nova.virt.disk.api [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:40:05 compute-0 nova_compute[117413]: 2025-10-08 16:40:05.854 2 DEBUG nova.objects.instance [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 33416544-25f6-4028-a159-162014dbffea obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.362 2 DEBUG nova.objects.base [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<33416544-25f6-4028-a159-162014dbffea> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.362 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.403 2 DEBUG oslo_concurrency.processutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk.config 497664" returned: 0 in 0.040s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.404 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.406 2 DEBUG nova.virt.libvirt.vif [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1687199385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1687199',id=26,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:38:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-gwrt0ri3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:38:41Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=33416544-25f6-4028-a159-162014dbffea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "address": "fa:16:3e:b3:24:60", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape2fe6b5b-61", "ovs_interfaceid": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.407 2 DEBUG nova.network.os_vif_util [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "address": "fa:16:3e:b3:24:60", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape2fe6b5b-61", "ovs_interfaceid": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.408 2 DEBUG nova.network.os_vif_util [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:24:60,bridge_name='br-int',has_traffic_filtering=True,id=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2fe6b5b-61') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.409 2 DEBUG os_vif [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:24:60,bridge_name='br-int',has_traffic_filtering=True,id=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2fe6b5b-61') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.410 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.412 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '22b3f4e6-fed0-568f-bb55-57ebaa3e742b', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2fe6b5b-61, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tape2fe6b5b-61, col_values=(('qos', UUID('a49b2a6d-6af3-4574-b5a4-26cab8335187')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tape2fe6b5b-61, col_values=(('external_ids', {'iface-id': 'e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:24:60', 'vm-uuid': '33416544-25f6-4028-a159-162014dbffea'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 NetworkManager[1034]: <info>  [1759941606.4217] manager: (tape2fe6b5b-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.433 2 INFO os_vif [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:24:60,bridge_name='br-int',has_traffic_filtering=True,id=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2fe6b5b-61')
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.433 2 DEBUG nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.434 2 DEBUG nova.compute.manager [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6lsr992a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='33416544-25f6-4028-a159-162014dbffea',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.435 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.529 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:06 compute-0 nova_compute[117413]: 2025-10-08 16:40:06.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:07 compute-0 nova_compute[117413]: 2025-10-08 16:40:07.340 2 DEBUG nova.network.neutron [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Port e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:40:07 compute-0 nova_compute[117413]: 2025-10-08 16:40:07.358 2 DEBUG nova.compute.manager [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6lsr992a',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='33416544-25f6-4028-a159-162014dbffea',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:40:10 compute-0 kernel: tape2fe6b5b-61: entered promiscuous mode
Oct 08 16:40:10 compute-0 NetworkManager[1034]: <info>  [1759941610.5621] manager: (tape2fe6b5b-61): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Oct 08 16:40:10 compute-0 nova_compute[117413]: 2025-10-08 16:40:10.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:10 compute-0 ovn_controller[19768]: 2025-10-08T16:40:10Z|00229|binding|INFO|Claiming lport e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d for this additional chassis.
Oct 08 16:40:10 compute-0 ovn_controller[19768]: 2025-10-08T16:40:10Z|00230|binding|INFO|e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d: Claiming fa:16:3e:b3:24:60 10.100.0.5
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.570 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:24:60 10.100.0.5'], port_security=['fa:16:3e:b3:24:60 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '33416544-25f6-4028-a159-162014dbffea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '10', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.571 28633 INFO neutron.agent.ovn.metadata.agent [-] Port e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.572 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:40:10 compute-0 ovn_controller[19768]: 2025-10-08T16:40:10Z|00231|binding|INFO|Setting lport e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d ovn-installed in OVS
Oct 08 16:40:10 compute-0 nova_compute[117413]: 2025-10-08 16:40:10.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:10 compute-0 nova_compute[117413]: 2025-10-08 16:40:10.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.601 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0fd81c-7a9f-436d-b8a0-ee62bf0d3f18]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:10 compute-0 systemd-udevd[151929]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:40:10 compute-0 NetworkManager[1034]: <info>  [1759941610.6192] device (tape2fe6b5b-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:40:10 compute-0 NetworkManager[1034]: <info>  [1759941610.6205] device (tape2fe6b5b-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:40:10 compute-0 systemd-machined[77548]: New machine qemu-20-instance-0000001a.
Oct 08 16:40:10 compute-0 podman[151895]: 2025-10-08 16:40:10.639288244 +0000 UTC m=+0.112862955 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., release=1755695350, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git)
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.639 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[081b520a-36e9-4dff-ac47-3bb44eaab03d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:10 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-0000001a.
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.643 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3a204c5d-088e-4587-ba56-121aa63b1f64]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.685 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[2781e8d7-9f5e-4a50-84d9-748cbedb1af4]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.707 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a4e15d-2df7-49f9-9b2e-66bda98643aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 287210, 'reachable_time': 39805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 151943, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.731 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a4a6dc-1a4b-4c0f-a5b0-2194e5de4946]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 287226, 'tstamp': 287226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151945, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 287230, 'tstamp': 287230}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 151945, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.733 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:10 compute-0 nova_compute[117413]: 2025-10-08 16:40:10.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.737 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2742327d-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.737 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.737 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2742327d-20, col_values=(('external_ids', {'iface-id': '8a2f4b59-05b7-414e-b353-f44ee56d820a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.738 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:40:10 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:10.740 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f223de1d-eedb-4972-86dd-3006289e998f]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-2742327d-2338-460e-952e-6446bba2b03f\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 2742327d-2338-460e-952e-6446bba2b03f\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:11 compute-0 nova_compute[117413]: 2025-10-08 16:40:11.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:11 compute-0 nova_compute[117413]: 2025-10-08 16:40:11.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:12 compute-0 nova_compute[117413]: 2025-10-08 16:40:12.868 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:13 compute-0 nova_compute[117413]: 2025-10-08 16:40:13.381 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:13 compute-0 nova_compute[117413]: 2025-10-08 16:40:13.382 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:13 compute-0 nova_compute[117413]: 2025-10-08 16:40:13.382 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:13 compute-0 nova_compute[117413]: 2025-10-08 16:40:13.382 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:40:14 compute-0 ovn_controller[19768]: 2025-10-08T16:40:14Z|00232|binding|INFO|Claiming lport e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d for this chassis.
Oct 08 16:40:14 compute-0 ovn_controller[19768]: 2025-10-08T16:40:14Z|00233|binding|INFO|e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d: Claiming fa:16:3e:b3:24:60 10.100.0.5
Oct 08 16:40:14 compute-0 ovn_controller[19768]: 2025-10-08T16:40:14Z|00234|binding|INFO|Setting lport e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d up in Southbound
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.434 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.494 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.495 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.567 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.573 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.633 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.635 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.718 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.886 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.888 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.924 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.925 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5835MB free_disk=73.19277954101562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.926 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:14 compute-0 nova_compute[117413]: 2025-10-08 16:40:14.926 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:15 compute-0 nova_compute[117413]: 2025-10-08 16:40:15.509 2 INFO nova.compute.manager [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Post operation of migration started
Oct 08 16:40:15 compute-0 nova_compute[117413]: 2025-10-08 16:40:15.510 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.261 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.261 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.356 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-33416544-25f6-4028-a159-162014dbffea" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.356 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-33416544-25f6-4028-a159-162014dbffea" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.357 2 DEBUG nova.network.neutron [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.452 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Migration for instance 33416544-25f6-4028-a159-162014dbffea refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.865 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.960 2 INFO nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 33416544-25f6-4028-a159-162014dbffea] Updating resource usage from migration dd327f4f-a663-4565-843e-9c520384a75f
Oct 08 16:40:16 compute-0 nova_compute[117413]: 2025-10-08 16:40:16.960 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 33416544-25f6-4028-a159-162014dbffea] Starting to track incoming migration dd327f4f-a663-4565-843e-9c520384a75f with flavor 43cd5d45-bd07-4889-a671-dd23291090c1 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 08 16:40:17 compute-0 nova_compute[117413]: 2025-10-08 16:40:17.344 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:17 compute-0 podman[151981]: 2025-10-08 16:40:17.472614471 +0000 UTC m=+0.067705035 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 08 16:40:17 compute-0 nova_compute[117413]: 2025-10-08 16:40:17.503 2 DEBUG nova.network.neutron [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Updating instance_info_cache with network_info: [{"id": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "address": "fa:16:3e:b3:24:60", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2fe6b5b-61", "ovs_interfaceid": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:40:17 compute-0 nova_compute[117413]: 2025-10-08 16:40:17.592 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.012 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-33416544-25f6-4028-a159-162014dbffea" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.099 2 WARNING nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 33416544-25f6-4028-a159-162014dbffea has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.100 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.100 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:40:14 up 48 min,  0 user,  load average: 0.81, 0.22, 0.20\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_a137143db2f84a9f89a9bfc5d20558d0': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.182 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.241 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.242 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.255 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.275 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.325 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.539 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:18 compute-0 nova_compute[117413]: 2025-10-08 16:40:18.833 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:40:19 compute-0 nova_compute[117413]: 2025-10-08 16:40:19.342 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:40:19 compute-0 nova_compute[117413]: 2025-10-08 16:40:19.343 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.417s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:19 compute-0 nova_compute[117413]: 2025-10-08 16:40:19.343 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.804s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:19 compute-0 nova_compute[117413]: 2025-10-08 16:40:19.343 2 DEBUG oslo_concurrency.lockutils [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:19 compute-0 nova_compute[117413]: 2025-10-08 16:40:19.348 2 INFO nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:40:19 compute-0 virtqemud[117740]: Domain id=20 name='instance-0000001a' uuid=33416544-25f6-4028-a159-162014dbffea is tainted: custom-monitor
Oct 08 16:40:19 compute-0 podman[152000]: 2025-10-08 16:40:19.475338599 +0000 UTC m=+0.073774541 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.361 2 INFO nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.838 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.838 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.838 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.839 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.839 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:20 compute-0 nova_compute[117413]: 2025-10-08 16:40:20.839 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:40:21 compute-0 nova_compute[117413]: 2025-10-08 16:40:21.366 2 INFO nova.virt.libvirt.driver [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:40:21 compute-0 nova_compute[117413]: 2025-10-08 16:40:21.371 2 DEBUG nova.compute.manager [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:40:21 compute-0 nova_compute[117413]: 2025-10-08 16:40:21.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:21 compute-0 nova_compute[117413]: 2025-10-08 16:40:21.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:21 compute-0 nova_compute[117413]: 2025-10-08 16:40:21.882 2 DEBUG nova.objects.instance [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:40:22 compute-0 nova_compute[117413]: 2025-10-08 16:40:22.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:22 compute-0 nova_compute[117413]: 2025-10-08 16:40:22.902 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:23 compute-0 nova_compute[117413]: 2025-10-08 16:40:23.031 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:23 compute-0 nova_compute[117413]: 2025-10-08 16:40:23.032 2 WARNING neutronclient.v2_0.client [None req-7779f433-8e14-4609-8aa5-b46bf49dc181 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:24 compute-0 nova_compute[117413]: 2025-10-08 16:40:24.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:24 compute-0 podman[152019]: 2025-10-08 16:40:24.452322304 +0000 UTC m=+0.060833915 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:40:24 compute-0 podman[152020]: 2025-10-08 16:40:24.521414088 +0000 UTC m=+0.116094008 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.412 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.413 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.413 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.414 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.414 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.429 2 INFO nova.compute.manager [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Terminating instance
Oct 08 16:40:25 compute-0 nova_compute[117413]: 2025-10-08 16:40:25.949 2 DEBUG nova.compute.manager [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:40:25 compute-0 kernel: tap037479b5-39 (unregistering): left promiscuous mode
Oct 08 16:40:25 compute-0 NetworkManager[1034]: <info>  [1759941625.9917] device (tap037479b5-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 ovn_controller[19768]: 2025-10-08T16:40:26Z|00235|binding|INFO|Releasing lport 037479b5-3904-4ab8-b4c7-8ce540406327 from this chassis (sb_readonly=0)
Oct 08 16:40:26 compute-0 ovn_controller[19768]: 2025-10-08T16:40:26Z|00236|binding|INFO|Setting lport 037479b5-3904-4ab8-b4c7-8ce540406327 down in Southbound
Oct 08 16:40:26 compute-0 ovn_controller[19768]: 2025-10-08T16:40:26Z|00237|binding|INFO|Removing iface tap037479b5-39 ovn-installed in OVS
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.010 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:97:a5 10.100.0.13'], port_security=['fa:16:3e:ec:97:a5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '15', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=037479b5-3904-4ab8-b4c7-8ce540406327) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.011 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 037479b5-3904-4ab8-b4c7-8ce540406327 in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.012 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.034 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf3c0b1-1758-4740-ab04-99573c870f41]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct 08 16:40:26 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001b.scope: Consumed 3.028s CPU time.
Oct 08 16:40:26 compute-0 systemd-machined[77548]: Machine qemu-19-instance-0000001b terminated.
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.076 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[1a208d7e-a957-4061-9895-ec2194843385]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.080 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a4dd1370-c700-406d-9cff-361bc4465086]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.122 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[19bc80f5-210b-42ae-83d5-abb037411c89]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.146 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a88768c0-2a20-48d1-8839-2b706fe624bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 48, 'tx_packets': 7, 'rx_bytes': 2512, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 48, 'tx_packets': 7, 'rx_bytes': 2512, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 287210, 'reachable_time': 39805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152079, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.168 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[96cbc1df-f0e2-4e52-b5e3-5a73414eed2a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 287226, 'tstamp': 287226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152080, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 287230, 'tstamp': 287230}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152080, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.169 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.179 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2742327d-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.179 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.180 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2742327d-20, col_values=(('external_ids', {'iface-id': '8a2f4b59-05b7-414e-b353-f44ee56d820a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.180 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:40:26 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:26.181 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[0697611f-c40b-46c1-aeb4-632113f971ae]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-2742327d-2338-460e-952e-6446bba2b03f\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 2742327d-2338-460e-952e-6446bba2b03f\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.217 2 INFO nova.virt.libvirt.driver [-] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Instance destroyed successfully.
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.218 2 DEBUG nova.objects.instance [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lazy-loading 'resources' on Instance uuid bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.333 2 DEBUG nova.compute.manager [req-7548e6f5-fd96-4842-958a-a4567c1e401c req-cde7eabd-6c06-4e02-802c-78f5e424a56e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Received event network-vif-unplugged-037479b5-3904-4ab8-b4c7-8ce540406327 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.333 2 DEBUG oslo_concurrency.lockutils [req-7548e6f5-fd96-4842-958a-a4567c1e401c req-cde7eabd-6c06-4e02-802c-78f5e424a56e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.333 2 DEBUG oslo_concurrency.lockutils [req-7548e6f5-fd96-4842-958a-a4567c1e401c req-cde7eabd-6c06-4e02-802c-78f5e424a56e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.334 2 DEBUG oslo_concurrency.lockutils [req-7548e6f5-fd96-4842-958a-a4567c1e401c req-cde7eabd-6c06-4e02-802c-78f5e424a56e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.334 2 DEBUG nova.compute.manager [req-7548e6f5-fd96-4842-958a-a4567c1e401c req-cde7eabd-6c06-4e02-802c-78f5e424a56e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] No waiting events found dispatching network-vif-unplugged-037479b5-3904-4ab8-b4c7-8ce540406327 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.334 2 DEBUG nova.compute.manager [req-7548e6f5-fd96-4842-958a-a4567c1e401c req-cde7eabd-6c06-4e02-802c-78f5e424a56e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Received event network-vif-unplugged-037479b5-3904-4ab8-b4c7-8ce540406327 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.726 2 DEBUG nova.virt.libvirt.vif [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:38:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-739822020',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-7398220',id=27,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:39:00Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-20y21oyq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:39:53Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "037479b5-3904-4ab8-b4c7-8ce540406327", "address": "fa:16:3e:ec:97:a5", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037479b5-39", "ovs_interfaceid": "037479b5-3904-4ab8-b4c7-8ce540406327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.727 2 DEBUG nova.network.os_vif_util [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converting VIF {"id": "037479b5-3904-4ab8-b4c7-8ce540406327", "address": "fa:16:3e:ec:97:a5", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap037479b5-39", "ovs_interfaceid": "037479b5-3904-4ab8-b4c7-8ce540406327", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.727 2 DEBUG nova.network.os_vif_util [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:97:a5,bridge_name='br-int',has_traffic_filtering=True,id=037479b5-3904-4ab8-b4c7-8ce540406327,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037479b5-39') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.728 2 DEBUG os_vif [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:97:a5,bridge_name='br-int',has_traffic_filtering=True,id=037479b5-3904-4ab8-b4c7-8ce540406327,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037479b5-39') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap037479b5-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=fb351db6-b568-4f0f-9bb7-460980e79604) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.736 2 INFO os_vif [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:97:a5,bridge_name='br-int',has_traffic_filtering=True,id=037479b5-3904-4ab8-b4c7-8ce540406327,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap037479b5-39')
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.737 2 INFO nova.virt.libvirt.driver [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Deleting instance files /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21_del
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.738 2 INFO nova.virt.libvirt.driver [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Deletion of /var/lib/nova/instances/bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21_del complete
Oct 08 16:40:26 compute-0 nova_compute[117413]: 2025-10-08 16:40:26.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:27 compute-0 nova_compute[117413]: 2025-10-08 16:40:27.250 2 INFO nova.compute.manager [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 08 16:40:27 compute-0 nova_compute[117413]: 2025-10-08 16:40:27.251 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:40:27 compute-0 nova_compute[117413]: 2025-10-08 16:40:27.251 2 DEBUG nova.compute.manager [-] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:40:27 compute-0 nova_compute[117413]: 2025-10-08 16:40:27.251 2 DEBUG nova.network.neutron [-] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:40:27 compute-0 nova_compute[117413]: 2025-10-08 16:40:27.252 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.266 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.402 2 DEBUG nova.compute.manager [req-54528f1a-f3f3-4453-9417-f1c37df10ad7 req-801e79e7-f69a-417a-902f-74fe822ee5ba c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Received event network-vif-unplugged-037479b5-3904-4ab8-b4c7-8ce540406327 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.402 2 DEBUG oslo_concurrency.lockutils [req-54528f1a-f3f3-4453-9417-f1c37df10ad7 req-801e79e7-f69a-417a-902f-74fe822ee5ba c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.403 2 DEBUG oslo_concurrency.lockutils [req-54528f1a-f3f3-4453-9417-f1c37df10ad7 req-801e79e7-f69a-417a-902f-74fe822ee5ba c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.403 2 DEBUG oslo_concurrency.lockutils [req-54528f1a-f3f3-4453-9417-f1c37df10ad7 req-801e79e7-f69a-417a-902f-74fe822ee5ba c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.403 2 DEBUG nova.compute.manager [req-54528f1a-f3f3-4453-9417-f1c37df10ad7 req-801e79e7-f69a-417a-902f-74fe822ee5ba c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] No waiting events found dispatching network-vif-unplugged-037479b5-3904-4ab8-b4c7-8ce540406327 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:40:28 compute-0 nova_compute[117413]: 2025-10-08 16:40:28.403 2 DEBUG nova.compute.manager [req-54528f1a-f3f3-4453-9417-f1c37df10ad7 req-801e79e7-f69a-417a-902f-74fe822ee5ba c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Received event network-vif-unplugged-037479b5-3904-4ab8-b4c7-8ce540406327 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:40:29 compute-0 nova_compute[117413]: 2025-10-08 16:40:29.091 2 DEBUG nova.network.neutron [-] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:40:29 compute-0 nova_compute[117413]: 2025-10-08 16:40:29.597 2 INFO nova.compute.manager [-] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Took 2.35 seconds to deallocate network for instance.
Oct 08 16:40:29 compute-0 podman[127881]: time="2025-10-08T16:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:40:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:40:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3500 "" "Go-http-client/1.1"
Oct 08 16:40:30 compute-0 nova_compute[117413]: 2025-10-08 16:40:30.121 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:30 compute-0 nova_compute[117413]: 2025-10-08 16:40:30.122 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:30 compute-0 nova_compute[117413]: 2025-10-08 16:40:30.183 2 DEBUG nova.compute.provider_tree [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:40:30 compute-0 nova_compute[117413]: 2025-10-08 16:40:30.465 2 DEBUG nova.compute.manager [req-09bb2be2-c9b7-4ac1-a910-7c1646659c67 req-39eb058e-3c22-40e0-9fdd-9973af2585a6 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21] Received event network-vif-deleted-037479b5-3904-4ab8-b4c7-8ce540406327 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:40:30 compute-0 nova_compute[117413]: 2025-10-08 16:40:30.690 2 DEBUG nova.scheduler.client.report [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:40:31 compute-0 nova_compute[117413]: 2025-10-08 16:40:31.198 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.076s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:31 compute-0 nova_compute[117413]: 2025-10-08 16:40:31.223 2 INFO nova.scheduler.client.report [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Deleted allocations for instance bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: ERROR   16:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: ERROR   16:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: ERROR   16:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: ERROR   16:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: ERROR   16:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:40:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:40:31 compute-0 nova_compute[117413]: 2025-10-08 16:40:31.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:31 compute-0 nova_compute[117413]: 2025-10-08 16:40:31.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:32 compute-0 nova_compute[117413]: 2025-10-08 16:40:32.269 2 DEBUG oslo_concurrency.lockutils [None req-90f30764-f96e-4c9d-aa46-4963ed22967d aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "bbe2b3e3-e2ae-4e70-b3a2-a8be9ffeec21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.856s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.342 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "33416544-25f6-4028-a159-162014dbffea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.343 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.343 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "33416544-25f6-4028-a159-162014dbffea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.344 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.344 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.360 2 INFO nova.compute.manager [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Terminating instance
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.874 2 DEBUG nova.compute.manager [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:40:33 compute-0 kernel: tape2fe6b5b-61 (unregistering): left promiscuous mode
Oct 08 16:40:33 compute-0 NetworkManager[1034]: <info>  [1759941633.9024] device (tape2fe6b5b-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:40:33 compute-0 ovn_controller[19768]: 2025-10-08T16:40:33Z|00238|binding|INFO|Releasing lport e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d from this chassis (sb_readonly=0)
Oct 08 16:40:33 compute-0 ovn_controller[19768]: 2025-10-08T16:40:33Z|00239|binding|INFO|Setting lport e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d down in Southbound
Oct 08 16:40:33 compute-0 ovn_controller[19768]: 2025-10-08T16:40:33Z|00240|binding|INFO|Removing iface tape2fe6b5b-61 ovn-installed in OVS
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:33.917 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:24:60 10.100.0.5'], port_security=['fa:16:3e:b3:24:60 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '33416544-25f6-4028-a159-162014dbffea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '14', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:40:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:33.918 28633 INFO neutron.agent.ovn.metadata.agent [-] Port e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:40:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:33.919 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2742327d-2338-460e-952e-6446bba2b03f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:40:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:33.919 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[212f24e9-fafa-4893-a2fc-5d60467deb91]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:33.920 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2742327d-2338-460e-952e-6446bba2b03f namespace which is not needed anymore
Oct 08 16:40:33 compute-0 nova_compute[117413]: 2025-10-08 16:40:33.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:33 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 08 16:40:33 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000001a.scope: Consumed 2.468s CPU time.
Oct 08 16:40:33 compute-0 systemd-machined[77548]: Machine qemu-20-instance-0000001a terminated.
Oct 08 16:40:34 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [NOTICE]   (151752) : haproxy version is 3.0.5-8e879a5
Oct 08 16:40:34 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [NOTICE]   (151752) : path to executable is /usr/sbin/haproxy
Oct 08 16:40:34 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [WARNING]  (151752) : Exiting Master process...
Oct 08 16:40:34 compute-0 podman[152124]: 2025-10-08 16:40:34.04928896 +0000 UTC m=+0.036168000 container kill 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 08 16:40:34 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [ALERT]    (151752) : Current worker (151754) exited with code 143 (Terminated)
Oct 08 16:40:34 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[151747]: [WARNING]  (151752) : All workers exited. Exiting... (0)
Oct 08 16:40:34 compute-0 systemd[1]: libpod-1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da.scope: Deactivated successfully.
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.137 2 INFO nova.virt.libvirt.driver [-] [instance: 33416544-25f6-4028-a159-162014dbffea] Instance destroyed successfully.
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.138 2 DEBUG nova.objects.instance [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lazy-loading 'resources' on Instance uuid 33416544-25f6-4028-a159-162014dbffea obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:40:34 compute-0 podman[152139]: 2025-10-08 16:40:34.33130493 +0000 UTC m=+0.259892769 container died 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.340 2 DEBUG nova.compute.manager [req-519f340e-125c-4b9b-b174-eb85315d224a req-d8820cbd-adbe-4ed3-b3b2-dfecabc471af c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Received event network-vif-unplugged-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.340 2 DEBUG oslo_concurrency.lockutils [req-519f340e-125c-4b9b-b174-eb85315d224a req-d8820cbd-adbe-4ed3-b3b2-dfecabc471af c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "33416544-25f6-4028-a159-162014dbffea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.341 2 DEBUG oslo_concurrency.lockutils [req-519f340e-125c-4b9b-b174-eb85315d224a req-d8820cbd-adbe-4ed3-b3b2-dfecabc471af c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.341 2 DEBUG oslo_concurrency.lockutils [req-519f340e-125c-4b9b-b174-eb85315d224a req-d8820cbd-adbe-4ed3-b3b2-dfecabc471af c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.341 2 DEBUG nova.compute.manager [req-519f340e-125c-4b9b-b174-eb85315d224a req-d8820cbd-adbe-4ed3-b3b2-dfecabc471af c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] No waiting events found dispatching network-vif-unplugged-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.342 2 DEBUG nova.compute.manager [req-519f340e-125c-4b9b-b174-eb85315d224a req-d8820cbd-adbe-4ed3-b3b2-dfecabc471af c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Received event network-vif-unplugged-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.644 2 DEBUG nova.virt.libvirt.vif [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1687199385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1687199',id=26,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:38:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-gwrt0ri3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:40:22Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=33416544-25f6-4028-a159-162014dbffea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "address": "fa:16:3e:b3:24:60", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2fe6b5b-61", "ovs_interfaceid": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.646 2 DEBUG nova.network.os_vif_util [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converting VIF {"id": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "address": "fa:16:3e:b3:24:60", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2fe6b5b-61", "ovs_interfaceid": "e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.646 2 DEBUG nova.network.os_vif_util [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:24:60,bridge_name='br-int',has_traffic_filtering=True,id=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2fe6b5b-61') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.647 2 DEBUG os_vif [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:24:60,bridge_name='br-int',has_traffic_filtering=True,id=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2fe6b5b-61') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2fe6b5b-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.654 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=a49b2a6d-6af3-4574-b5a4-26cab8335187) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.657 2 INFO os_vif [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:24:60,bridge_name='br-int',has_traffic_filtering=True,id=e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2fe6b5b-61')
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.658 2 INFO nova.virt.libvirt.driver [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Deleting instance files /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea_del
Oct 08 16:40:34 compute-0 nova_compute[117413]: 2025-10-08 16:40:34.658 2 INFO nova.virt.libvirt.driver [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Deletion of /var/lib/nova/instances/33416544-25f6-4028-a159-162014dbffea_del complete
Oct 08 16:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da-userdata-shm.mount: Deactivated successfully.
Oct 08 16:40:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e7d51457f4d5d37f7805b6595cdc882595939fa00e58ca76381279713de8ba9-merged.mount: Deactivated successfully.
Oct 08 16:40:35 compute-0 podman[152139]: 2025-10-08 16:40:35.135247539 +0000 UTC m=+1.063835358 container cleanup 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Oct 08 16:40:35 compute-0 systemd[1]: libpod-conmon-1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da.scope: Deactivated successfully.
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.170 2 INFO nova.compute.manager [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.171 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.171 2 DEBUG nova.compute.manager [-] [instance: 33416544-25f6-4028-a159-162014dbffea] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.171 2 DEBUG nova.network.neutron [-] [instance: 33416544-25f6-4028-a159-162014dbffea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.171 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.254 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:40:35 compute-0 podman[152167]: 2025-10-08 16:40:35.763434808 +0000 UTC m=+1.442019596 container remove 1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.775 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5aeefc9f-43a7-4aee-b6cf-030c4f9d6c81]: (4, ("Wed Oct  8 04:40:33 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f (1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da)\n1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da\nWed Oct  8 04:40:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f (1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da)\n1b7c1db85494b784c01398008fc48864ac8642229ba347020f85b1db5b6540da\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.778 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[bc51ec84-3fed-4ad2-90e6-0a2ca0f271dc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.780 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.781 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[08c33ea0-2a0e-417c-a58c-f56dd7a1b5c7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.782 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:35 compute-0 kernel: tap2742327d-20: left promiscuous mode
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.802 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f109c71f-9f76-4257-8111-1b7002ac9d8c]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.826 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[18853b22-a4f9-48c3-9fdd-8261d13ba133]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.827 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fed044b1-1bc9-4f5f-aa35-43fccaca3371]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.847 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[aa901a8e-3645-4298-9d26-5356d4b1d6b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 287201, 'reachable_time': 31334, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152195, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.850 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2742327d-2338-460e-952e-6446bba2b03f deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:40:35 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:35.850 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[19e292e5-e2b8-43aa-a144-91982cb2430e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:40:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d2742327d\x2d2338\x2d460e\x2d952e\x2d6446bba2b03f.mount: Deactivated successfully.
Oct 08 16:40:35 compute-0 podman[152187]: 2025-10-08 16:40:35.896812487 +0000 UTC m=+0.067249341 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, org.label-schema.build-date=20251007)
Oct 08 16:40:35 compute-0 nova_compute[117413]: 2025-10-08 16:40:35.988 2 DEBUG nova.network.neutron [-] [instance: 33416544-25f6-4028-a159-162014dbffea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.058 2 DEBUG nova.compute.manager [req-892e1bc7-3c1b-468a-9fed-6bb878f22820 req-0ae2a37a-d632-44e9-828d-d234adaee1db c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Received event network-vif-deleted-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.388 2 DEBUG nova.compute.manager [req-437bc58f-0dc4-4f5c-b9b8-7c86712a9ca3 req-f5fde1f2-10b8-4567-88f4-b05a7c8d1c39 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Received event network-vif-unplugged-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.388 2 DEBUG oslo_concurrency.lockutils [req-437bc58f-0dc4-4f5c-b9b8-7c86712a9ca3 req-f5fde1f2-10b8-4567-88f4-b05a7c8d1c39 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "33416544-25f6-4028-a159-162014dbffea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.388 2 DEBUG oslo_concurrency.lockutils [req-437bc58f-0dc4-4f5c-b9b8-7c86712a9ca3 req-f5fde1f2-10b8-4567-88f4-b05a7c8d1c39 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.389 2 DEBUG oslo_concurrency.lockutils [req-437bc58f-0dc4-4f5c-b9b8-7c86712a9ca3 req-f5fde1f2-10b8-4567-88f4-b05a7c8d1c39 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.389 2 DEBUG nova.compute.manager [req-437bc58f-0dc4-4f5c-b9b8-7c86712a9ca3 req-f5fde1f2-10b8-4567-88f4-b05a7c8d1c39 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] No waiting events found dispatching network-vif-unplugged-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.389 2 DEBUG nova.compute.manager [req-437bc58f-0dc4-4f5c-b9b8-7c86712a9ca3 req-f5fde1f2-10b8-4567-88f4-b05a7c8d1c39 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 33416544-25f6-4028-a159-162014dbffea] Received event network-vif-unplugged-e2fe6b5b-61fc-4aff-9a5b-9d5d7f78bd0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.501 2 INFO nova.compute.manager [-] [instance: 33416544-25f6-4028-a159-162014dbffea] Took 1.33 seconds to deallocate network for instance.
Oct 08 16:40:36 compute-0 nova_compute[117413]: 2025-10-08 16:40:36.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:37 compute-0 nova_compute[117413]: 2025-10-08 16:40:37.028 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:37 compute-0 nova_compute[117413]: 2025-10-08 16:40:37.029 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:37 compute-0 nova_compute[117413]: 2025-10-08 16:40:37.036 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:37 compute-0 nova_compute[117413]: 2025-10-08 16:40:37.068 2 INFO nova.scheduler.client.report [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Deleted allocations for instance 33416544-25f6-4028-a159-162014dbffea
Oct 08 16:40:38 compute-0 nova_compute[117413]: 2025-10-08 16:40:38.107 2 DEBUG oslo_concurrency.lockutils [None req-8542d143-5249-4287-863f-e59b7a8ec629 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "33416544-25f6-4028-a159-162014dbffea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.764s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:39 compute-0 nova_compute[117413]: 2025-10-08 16:40:39.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:41 compute-0 podman[152210]: 2025-10-08 16:40:41.454718731 +0000 UTC m=+0.065727408 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git)
Oct 08 16:40:41 compute-0 nova_compute[117413]: 2025-10-08 16:40:41.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:41.932 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:40:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:41.932 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:40:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:41.932 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:40:44 compute-0 nova_compute[117413]: 2025-10-08 16:40:44.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:45.588 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:40:45 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:45.589 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:40:45 compute-0 nova_compute[117413]: 2025-10-08 16:40:45.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:46 compute-0 nova_compute[117413]: 2025-10-08 16:40:46.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:48 compute-0 podman[152236]: 2025-10-08 16:40:48.459961765 +0000 UTC m=+0.063042030 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 08 16:40:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:40:49.591 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:40:49 compute-0 nova_compute[117413]: 2025-10-08 16:40:49.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:50 compute-0 podman[152256]: 2025-10-08 16:40:50.446977968 +0000 UTC m=+0.055277264 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:40:51 compute-0 nova_compute[117413]: 2025-10-08 16:40:51.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:54 compute-0 nova_compute[117413]: 2025-10-08 16:40:54.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:55 compute-0 podman[152276]: 2025-10-08 16:40:55.464405106 +0000 UTC m=+0.076400087 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:40:55 compute-0 podman[152277]: 2025-10-08 16:40:55.476002582 +0000 UTC m=+0.080379272 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 08 16:40:56 compute-0 nova_compute[117413]: 2025-10-08 16:40:56.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:59 compute-0 nova_compute[117413]: 2025-10-08 16:40:59.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:40:59 compute-0 podman[127881]: time="2025-10-08T16:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:40:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:40:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3036 "" "Go-http-client/1.1"
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: ERROR   16:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: ERROR   16:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: ERROR   16:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: ERROR   16:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: ERROR   16:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:41:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:41:01 compute-0 nova_compute[117413]: 2025-10-08 16:41:01.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:04 compute-0 nova_compute[117413]: 2025-10-08 16:41:04.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:06 compute-0 podman[152326]: 2025-10-08 16:41:06.449655898 +0000 UTC m=+0.054901123 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 08 16:41:06 compute-0 nova_compute[117413]: 2025-10-08 16:41:06.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:09 compute-0 nova_compute[117413]: 2025-10-08 16:41:09.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:11 compute-0 nova_compute[117413]: 2025-10-08 16:41:11.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:12 compute-0 nova_compute[117413]: 2025-10-08 16:41:12.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:12 compute-0 podman[152347]: 2025-10-08 16:41:12.451910211 +0000 UTC m=+0.064061509 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=)
Oct 08 16:41:12 compute-0 nova_compute[117413]: 2025-10-08 16:41:12.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:41:12 compute-0 nova_compute[117413]: 2025-10-08 16:41:12.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:41:12 compute-0 nova_compute[117413]: 2025-10-08 16:41:12.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:41:12 compute-0 nova_compute[117413]: 2025-10-08 16:41:12.877 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:41:13 compute-0 nova_compute[117413]: 2025-10-08 16:41:13.022 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:41:13 compute-0 nova_compute[117413]: 2025-10-08 16:41:13.023 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:13 compute-0 nova_compute[117413]: 2025-10-08 16:41:13.041 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:13 compute-0 nova_compute[117413]: 2025-10-08 16:41:13.042 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6180MB free_disk=73.24971771240234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:41:13 compute-0 nova_compute[117413]: 2025-10-08 16:41:13.043 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:41:13 compute-0 nova_compute[117413]: 2025-10-08 16:41:13.044 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:41:14 compute-0 nova_compute[117413]: 2025-10-08 16:41:14.100 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:41:14 compute-0 nova_compute[117413]: 2025-10-08 16:41:14.100 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:41:13 up 49 min,  0 user,  load average: 0.29, 0.18, 0.18\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:41:14 compute-0 nova_compute[117413]: 2025-10-08 16:41:14.132 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:41:14 compute-0 nova_compute[117413]: 2025-10-08 16:41:14.640 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:41:14 compute-0 nova_compute[117413]: 2025-10-08 16:41:14.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:15 compute-0 nova_compute[117413]: 2025-10-08 16:41:15.154 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:41:15 compute-0 nova_compute[117413]: 2025-10-08 16:41:15.154 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.111s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:41:16 compute-0 nova_compute[117413]: 2025-10-08 16:41:16.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:17 compute-0 nova_compute[117413]: 2025-10-08 16:41:17.155 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:17 compute-0 nova_compute[117413]: 2025-10-08 16:41:17.155 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:19 compute-0 nova_compute[117413]: 2025-10-08 16:41:19.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:19 compute-0 nova_compute[117413]: 2025-10-08 16:41:19.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:19 compute-0 nova_compute[117413]: 2025-10-08 16:41:19.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:41:19 compute-0 podman[152370]: 2025-10-08 16:41:19.451501371 +0000 UTC m=+0.058495078 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 16:41:19 compute-0 nova_compute[117413]: 2025-10-08 16:41:19.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:20 compute-0 nova_compute[117413]: 2025-10-08 16:41:20.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:21 compute-0 podman[152390]: 2025-10-08 16:41:21.463479796 +0000 UTC m=+0.061333149 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 08 16:41:21 compute-0 nova_compute[117413]: 2025-10-08 16:41:21.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:24 compute-0 nova_compute[117413]: 2025-10-08 16:41:24.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:24 compute-0 nova_compute[117413]: 2025-10-08 16:41:24.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:26 compute-0 nova_compute[117413]: 2025-10-08 16:41:26.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:41:26 compute-0 podman[152409]: 2025-10-08 16:41:26.481067639 +0000 UTC m=+0.071826504 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:41:26 compute-0 podman[152410]: 2025-10-08 16:41:26.50107356 +0000 UTC m=+0.096473980 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:41:26 compute-0 nova_compute[117413]: 2025-10-08 16:41:26.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:29 compute-0 nova_compute[117413]: 2025-10-08 16:41:29.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:29 compute-0 podman[127881]: time="2025-10-08T16:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:41:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:41:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: ERROR   16:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: ERROR   16:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: ERROR   16:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: ERROR   16:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: ERROR   16:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:41:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:41:31 compute-0 nova_compute[117413]: 2025-10-08 16:41:31.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:34 compute-0 nova_compute[117413]: 2025-10-08 16:41:34.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:36 compute-0 nova_compute[117413]: 2025-10-08 16:41:36.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:37 compute-0 podman[152459]: 2025-10-08 16:41:37.488668831 +0000 UTC m=+0.089873913 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:41:39 compute-0 nova_compute[117413]: 2025-10-08 16:41:39.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:41 compute-0 nova_compute[117413]: 2025-10-08 16:41:41.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:41:41.933 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:41:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:41:41.933 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:41:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:41:41.934 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:41:43 compute-0 podman[152480]: 2025-10-08 16:41:43.480496329 +0000 UTC m=+0.082242855 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container)
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.423 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Creating tmpfile /var/lib/nova/instances/tmpq9kplm9u to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.424 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.446 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Creating tmpfile /var/lib/nova/instances/tmpdmceyg9d to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.446 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.511 2 DEBUG nova.compute.manager [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq9kplm9u',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.534 2 DEBUG nova.compute.manager [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdmceyg9d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:41:44 compute-0 ovn_controller[19768]: 2025-10-08T16:41:44Z|00241|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 08 16:41:44 compute-0 nova_compute[117413]: 2025-10-08 16:41:44.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:46 compute-0 nova_compute[117413]: 2025-10-08 16:41:46.541 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:46 compute-0 nova_compute[117413]: 2025-10-08 16:41:46.561 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:46 compute-0 nova_compute[117413]: 2025-10-08 16:41:46.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:49 compute-0 nova_compute[117413]: 2025-10-08 16:41:49.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:50 compute-0 podman[152502]: 2025-10-08 16:41:50.447443721 +0000 UTC m=+0.056861705 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:41:51 compute-0 nova_compute[117413]: 2025-10-08 16:41:51.125 2 DEBUG nova.compute.manager [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdmceyg9d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e1327151-bae2-40dd-a12d-90799e91c86d',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:41:51 compute-0 nova_compute[117413]: 2025-10-08 16:41:51.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:52 compute-0 nova_compute[117413]: 2025-10-08 16:41:52.138 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-e1327151-bae2-40dd-a12d-90799e91c86d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:41:52 compute-0 nova_compute[117413]: 2025-10-08 16:41:52.139 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-e1327151-bae2-40dd-a12d-90799e91c86d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:41:52 compute-0 nova_compute[117413]: 2025-10-08 16:41:52.139 2 DEBUG nova.network.neutron [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:41:52 compute-0 podman[152524]: 2025-10-08 16:41:52.492341229 +0000 UTC m=+0.088855425 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 08 16:41:52 compute-0 nova_compute[117413]: 2025-10-08 16:41:52.646 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:53 compute-0 nova_compute[117413]: 2025-10-08 16:41:53.594 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:53 compute-0 nova_compute[117413]: 2025-10-08 16:41:53.935 2 DEBUG nova.network.neutron [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Updating instance_info_cache with network_info: [{"id": "83237a56-7017-4670-921f-7758d20aaae4", "address": "fa:16:3e:3d:cd:c4", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83237a56-70", "ovs_interfaceid": "83237a56-7017-4670-921f-7758d20aaae4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.441 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-e1327151-bae2-40dd-a12d-90799e91c86d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.457 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdmceyg9d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e1327151-bae2-40dd-a12d-90799e91c86d',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.458 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Creating instance directory: /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.458 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Creating disk.info with the contents: {'/var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk': 'qcow2', '/var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.458 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.459 2 DEBUG nova.objects.instance [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e1327151-bae2-40dd-a12d-90799e91c86d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.965 2 DEBUG oslo_utils.imageutils.format_inspector [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.969 2 DEBUG oslo_utils.imageutils.format_inspector [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:41:54 compute-0 nova_compute[117413]: 2025-10-08 16:41:54.970 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.025 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.026 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.027 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.028 2 DEBUG oslo_utils.imageutils.format_inspector [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.030 2 DEBUG oslo_utils.imageutils.format_inspector [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.031 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.109 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.111 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.152 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.154 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.154 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.205 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.206 2 DEBUG nova.virt.disk.api [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.207 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.264 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.265 2 DEBUG nova.virt.disk.api [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.266 2 DEBUG nova.objects.instance [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid e1327151-bae2-40dd-a12d-90799e91c86d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.773 2 DEBUG nova.objects.base [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<e1327151-bae2-40dd-a12d-90799e91c86d> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.774 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.807 2 DEBUG oslo_concurrency.processutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk.config 497664" returned: 0 in 0.032s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.808 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.809 2 DEBUG nova.virt.libvirt.vif [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:40:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1914151859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1914151',id=28,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:40:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-a1uf3mhk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:40:55Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=e1327151-bae2-40dd-a12d-90799e91c86d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83237a56-7017-4670-921f-7758d20aaae4", "address": "fa:16:3e:3d:cd:c4", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83237a56-70", "ovs_interfaceid": "83237a56-7017-4670-921f-7758d20aaae4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.810 2 DEBUG nova.network.os_vif_util [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "83237a56-7017-4670-921f-7758d20aaae4", "address": "fa:16:3e:3d:cd:c4", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap83237a56-70", "ovs_interfaceid": "83237a56-7017-4670-921f-7758d20aaae4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.811 2 DEBUG nova.network.os_vif_util [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:cd:c4,bridge_name='br-int',has_traffic_filtering=True,id=83237a56-7017-4670-921f-7758d20aaae4,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83237a56-70') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.811 2 DEBUG os_vif [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:cd:c4,bridge_name='br-int',has_traffic_filtering=True,id=83237a56-7017-4670-921f-7758d20aaae4,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83237a56-70') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '5d5010f7-fbcf-5d5d-aa58-adce150e6d65', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.820 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83237a56-70, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap83237a56-70, col_values=(('qos', UUID('50d1a112-f0fd-41b0-b317-a0af9c7d7432')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap83237a56-70, col_values=(('external_ids', {'iface-id': '83237a56-7017-4670-921f-7758d20aaae4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:cd:c4', 'vm-uuid': 'e1327151-bae2-40dd-a12d-90799e91c86d'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 NetworkManager[1034]: <info>  [1759941715.8247] manager: (tap83237a56-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.833 2 INFO os_vif [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:cd:c4,bridge_name='br-int',has_traffic_filtering=True,id=83237a56-7017-4670-921f-7758d20aaae4,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83237a56-70')
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.834 2 DEBUG nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.834 2 DEBUG nova.compute.manager [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdmceyg9d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e1327151-bae2-40dd-a12d-90799e91c86d',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:41:55 compute-0 nova_compute[117413]: 2025-10-08 16:41:55.835 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:56 compute-0 nova_compute[117413]: 2025-10-08 16:41:56.295 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:41:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:41:56.592 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:41:56 compute-0 nova_compute[117413]: 2025-10-08 16:41:56.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:56 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:41:56.594 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:41:56 compute-0 nova_compute[117413]: 2025-10-08 16:41:56.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:41:57 compute-0 nova_compute[117413]: 2025-10-08 16:41:57.152 2 DEBUG nova.network.neutron [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Port 83237a56-7017-4670-921f-7758d20aaae4 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:41:57 compute-0 nova_compute[117413]: 2025-10-08 16:41:57.168 2 DEBUG nova.compute.manager [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdmceyg9d',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e1327151-bae2-40dd-a12d-90799e91c86d',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:41:57 compute-0 podman[152565]: 2025-10-08 16:41:57.462857185 +0000 UTC m=+0.070116696 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:41:57 compute-0 podman[152566]: 2025-10-08 16:41:57.516388303 +0000 UTC m=+0.106952374 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:41:59 compute-0 podman[127881]: time="2025-10-08T16:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:41:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:41:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 08 16:42:00 compute-0 kernel: tap83237a56-70: entered promiscuous mode
Oct 08 16:42:00 compute-0 NetworkManager[1034]: <info>  [1759941720.4062] manager: (tap83237a56-70): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 08 16:42:00 compute-0 ovn_controller[19768]: 2025-10-08T16:42:00Z|00242|binding|INFO|Claiming lport 83237a56-7017-4670-921f-7758d20aaae4 for this additional chassis.
Oct 08 16:42:00 compute-0 ovn_controller[19768]: 2025-10-08T16:42:00Z|00243|binding|INFO|83237a56-7017-4670-921f-7758d20aaae4: Claiming fa:16:3e:3d:cd:c4 10.100.0.13
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.419 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:cd:c4 10.100.0.13'], port_security=['fa:16:3e:3d:cd:c4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e1327151-bae2-40dd-a12d-90799e91c86d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '10', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=83237a56-7017-4670-921f-7758d20aaae4) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.420 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 83237a56-7017-4670-921f-7758d20aaae4 in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.422 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:42:00 compute-0 ovn_controller[19768]: 2025-10-08T16:42:00Z|00244|binding|INFO|Setting lport 83237a56-7017-4670-921f-7758d20aaae4 ovn-installed in OVS
Oct 08 16:42:00 compute-0 systemd-udevd[152626]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.447 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[22aafb04-1c06-454c-999f-afbbe6fc5fae]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.449 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2742327d-21 in ovnmeta-2742327d-2338-460e-952e-6446bba2b03f namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:42:00 compute-0 NetworkManager[1034]: <info>  [1759941720.4624] device (tap83237a56-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:42:00 compute-0 NetworkManager[1034]: <info>  [1759941720.4634] device (tap83237a56-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.480 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2742327d-20 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.481 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[02569893-74e4-48bd-8d3c-442c6680afd9]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.482 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2e323b73-c096-40e9-b13a-a880d5d8f7d8]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 systemd-machined[77548]: New machine qemu-21-instance-0000001c.
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.498 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4d5b1c-37cb-406a-b81e-f26df302f957]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.516 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5e816f0a-4cad-4141-9f8e-744c061e69b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000001c.
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.555 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[426c1ae6-800c-4ef3-9a60-1cc367c72feb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.560 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c90220cf-664f-4f3f-826c-0fdc368745cd]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 NetworkManager[1034]: <info>  [1759941720.5616] manager: (tap2742327d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.595 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[dae2e6c0-1bbc-45ee-ade7-c0157e21ab1c]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.598 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[84c15a69-c0c4-4402-86ad-890811705b84]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 NetworkManager[1034]: <info>  [1759941720.6271] device (tap2742327d-20): carrier: link connected
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.636 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[a5509718-4846-45f9-906c-cd8ebc4a28e5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.661 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[80a03520-2465-4ccd-9d53-e1d09a2fbce2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300931, 'reachable_time': 34520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152661, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.680 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[142499c7-2945-4857-9189-9a67bbebf3f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:545d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 300931, 'tstamp': 300931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152663, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.697 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[349cfd57-b3a9-4917-8832-6c34f6b2b2db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300931, 'reachable_time': 34520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 152665, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.731 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca0cf3d-d4e1-4b2b-8b1b-277b67ae1e50]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.805 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[dba11291-a60e-4172-96ee-e17f10778078]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.806 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.806 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.806 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2742327d-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 NetworkManager[1034]: <info>  [1759941720.8097] manager: (tap2742327d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Oct 08 16:42:00 compute-0 kernel: tap2742327d-20: entered promiscuous mode
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.815 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2742327d-20, col_values=(('external_ids', {'iface-id': '8a2f4b59-05b7-414e-b353-f44ee56d820a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 ovn_controller[19768]: 2025-10-08T16:42:00Z|00245|binding|INFO|Releasing lport 8a2f4b59-05b7-414e-b353-f44ee56d820a from this chassis (sb_readonly=0)
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 nova_compute[117413]: 2025-10-08 16:42:00.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.833 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[68bb0f39-8868-433e-8155-fe50effef0c9]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.834 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.834 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.834 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for 2742327d-2338-460e-952e-6446bba2b03f disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.834 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.835 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca582d4-b6d1-437b-ad8b-e7a3a337ddb1]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.835 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.835 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[31d8b0a4-0ee8-4475-9ce5-e2a59d241186]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.835 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:42:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:00.836 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'env', 'PROCESS_TAG=haproxy-2742327d-2338-460e-952e-6446bba2b03f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2742327d-2338-460e-952e-6446bba2b03f.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:42:01 compute-0 podman[152703]: 2025-10-08 16:42:01.248992534 +0000 UTC m=+0.053351804 container create ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 08 16:42:01 compute-0 systemd[1]: Started libpod-conmon-ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99.scope.
Oct 08 16:42:01 compute-0 podman[152703]: 2025-10-08 16:42:01.220345041 +0000 UTC m=+0.024704341 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:42:01 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e554d77452114b2da6e77d40926f8f6cec812c2f8046e87a3d5217df29e8a2b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:42:01 compute-0 podman[152703]: 2025-10-08 16:42:01.337127387 +0000 UTC m=+0.141486677 container init ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 08 16:42:01 compute-0 podman[152703]: 2025-10-08 16:42:01.342699277 +0000 UTC m=+0.147058557 container start ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:42:01 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [NOTICE]   (152723) : New worker (152725) forked
Oct 08 16:42:01 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [NOTICE]   (152723) : Loading success.
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: ERROR   16:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: ERROR   16:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: ERROR   16:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: ERROR   16:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: ERROR   16:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:42:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:42:01 compute-0 nova_compute[117413]: 2025-10-08 16:42:01.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:03 compute-0 ovn_controller[19768]: 2025-10-08T16:42:03Z|00246|binding|INFO|Claiming lport 83237a56-7017-4670-921f-7758d20aaae4 for this chassis.
Oct 08 16:42:03 compute-0 ovn_controller[19768]: 2025-10-08T16:42:03Z|00247|binding|INFO|83237a56-7017-4670-921f-7758d20aaae4: Claiming fa:16:3e:3d:cd:c4 10.100.0.13
Oct 08 16:42:03 compute-0 ovn_controller[19768]: 2025-10-08T16:42:03Z|00248|binding|INFO|Setting lport 83237a56-7017-4670-921f-7758d20aaae4 up in Southbound
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.150 2 INFO nova.compute.manager [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Post operation of migration started
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.150 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.274 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.274 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.369 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-e1327151-bae2-40dd-a12d-90799e91c86d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.369 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-e1327151-bae2-40dd-a12d-90799e91c86d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.369 2 DEBUG nova.network.neutron [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:42:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:04.595 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:04 compute-0 nova_compute[117413]: 2025-10-08 16:42:04.875 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:05 compute-0 nova_compute[117413]: 2025-10-08 16:42:05.211 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:05 compute-0 nova_compute[117413]: 2025-10-08 16:42:05.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:06 compute-0 nova_compute[117413]: 2025-10-08 16:42:06.528 2 DEBUG nova.network.neutron [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Updating instance_info_cache with network_info: [{"id": "83237a56-7017-4670-921f-7758d20aaae4", "address": "fa:16:3e:3d:cd:c4", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83237a56-70", "ovs_interfaceid": "83237a56-7017-4670-921f-7758d20aaae4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:06 compute-0 nova_compute[117413]: 2025-10-08 16:42:06.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:07 compute-0 nova_compute[117413]: 2025-10-08 16:42:07.035 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-e1327151-bae2-40dd-a12d-90799e91c86d" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:42:07 compute-0 nova_compute[117413]: 2025-10-08 16:42:07.556 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:07 compute-0 nova_compute[117413]: 2025-10-08 16:42:07.556 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:07 compute-0 nova_compute[117413]: 2025-10-08 16:42:07.556 2 DEBUG oslo_concurrency.lockutils [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:07 compute-0 nova_compute[117413]: 2025-10-08 16:42:07.562 2 INFO nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:42:07 compute-0 virtqemud[117740]: Domain id=21 name='instance-0000001c' uuid=e1327151-bae2-40dd-a12d-90799e91c86d is tainted: custom-monitor
Oct 08 16:42:08 compute-0 podman[152747]: 2025-10-08 16:42:08.469614037 +0000 UTC m=+0.071190767 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 08 16:42:08 compute-0 nova_compute[117413]: 2025-10-08 16:42:08.572 2 INFO nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:42:09 compute-0 nova_compute[117413]: 2025-10-08 16:42:09.579 2 INFO nova.virt.libvirt.driver [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:42:09 compute-0 nova_compute[117413]: 2025-10-08 16:42:09.587 2 DEBUG nova.compute.manager [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:42:10 compute-0 nova_compute[117413]: 2025-10-08 16:42:10.099 2 DEBUG nova.objects.instance [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:42:10 compute-0 nova_compute[117413]: 2025-10-08 16:42:10.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:11 compute-0 nova_compute[117413]: 2025-10-08 16:42:11.130 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:11 compute-0 nova_compute[117413]: 2025-10-08 16:42:11.224 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:11 compute-0 nova_compute[117413]: 2025-10-08 16:42:11.225 2 WARNING neutronclient.v2_0.client [None req-38d0ef03-7900-4f71-b24f-ab6599611ec2 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:11 compute-0 nova_compute[117413]: 2025-10-08 16:42:11.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:14 compute-0 nova_compute[117413]: 2025-10-08 16:42:14.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:14 compute-0 podman[152767]: 2025-10-08 16:42:14.497155182 +0000 UTC m=+0.093969572 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9 Minimal.)
Oct 08 16:42:14 compute-0 nova_compute[117413]: 2025-10-08 16:42:14.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:14 compute-0 nova_compute[117413]: 2025-10-08 16:42:14.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:14 compute-0 nova_compute[117413]: 2025-10-08 16:42:14.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:14 compute-0 nova_compute[117413]: 2025-10-08 16:42:14.881 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:42:15 compute-0 nova_compute[117413]: 2025-10-08 16:42:15.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:15 compute-0 nova_compute[117413]: 2025-10-08 16:42:15.939 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.013 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.014 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.103 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.248 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.249 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.273 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.273 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5995MB free_disk=73.22075653076172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.274 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.274 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:16 compute-0 nova_compute[117413]: 2025-10-08 16:42:16.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:17 compute-0 nova_compute[117413]: 2025-10-08 16:42:17.293 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Applying migration context for instance e1327151-bae2-40dd-a12d-90799e91c86d as it has an incoming, in-progress migration 67106e22-8526-47c5-8a2e-9cb7e5784983. Migration status is running _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1046
Oct 08 16:42:17 compute-0 nova_compute[117413]: 2025-10-08 16:42:17.293 2 DEBUG nova.objects.instance [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:42:17 compute-0 nova_compute[117413]: 2025-10-08 16:42:17.801 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Migration for instance e2235ad2-cf92-464b-b586-698378cef322 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:979
Oct 08 16:42:17 compute-0 nova_compute[117413]: 2025-10-08 16:42:17.801 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 08 16:42:18 compute-0 nova_compute[117413]: 2025-10-08 16:42:18.316 2 INFO nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: e2235ad2-cf92-464b-b586-698378cef322] Updating resource usage from migration ddd4af4f-76cc-478b-9c1d-3d5e6a88952d
Oct 08 16:42:18 compute-0 nova_compute[117413]: 2025-10-08 16:42:18.317 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: e2235ad2-cf92-464b-b586-698378cef322] Starting to track incoming migration ddd4af4f-76cc-478b-9c1d-3d5e6a88952d with flavor 43cd5d45-bd07-4889-a671-dd23291090c1 _update_usage_from_migration /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1536
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.359 2 WARNING nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance e2235ad2-cf92-464b-b586-698378cef322 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.360 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance e1327151-bae2-40dd-a12d-90799e91c86d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.360 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.360 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:42:16 up 50 min,  0 user,  load average: 0.17, 0.16, 0.17\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_a137143db2f84a9f89a9bfc5d20558d0': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.417 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.548 2 DEBUG nova.compute.manager [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq9kplm9u',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e2235ad2-cf92-464b-b586-698378cef322',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:42:19 compute-0 nova_compute[117413]: 2025-10-08 16:42:19.927 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:42:20 compute-0 nova_compute[117413]: 2025-10-08 16:42:20.439 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:42:20 compute-0 nova_compute[117413]: 2025-10-08 16:42:20.440 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.166s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:20 compute-0 nova_compute[117413]: 2025-10-08 16:42:20.562 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-e2235ad2-cf92-464b-b586-698378cef322" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:42:20 compute-0 nova_compute[117413]: 2025-10-08 16:42:20.563 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-e2235ad2-cf92-464b-b586-698378cef322" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:42:20 compute-0 nova_compute[117413]: 2025-10-08 16:42:20.563 2 DEBUG nova.network.neutron [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:42:20 compute-0 nova_compute[117413]: 2025-10-08 16:42:20.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.070 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.440 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.440 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.440 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.441 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.441 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.441 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.456 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:21 compute-0 podman[152798]: 2025-10-08 16:42:21.516477558 +0000 UTC m=+0.115202361 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.705 2 DEBUG nova.network.neutron [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Updating instance_info_cache with network_info: [{"id": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "address": "fa:16:3e:b0:37:0a", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b9ee268-2c", "ovs_interfaceid": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:21 compute-0 nova_compute[117413]: 2025-10-08 16:42:21.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.212 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-e2235ad2-cf92-464b-b586-698378cef322" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.225 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq9kplm9u',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e2235ad2-cf92-464b-b586-698378cef322',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.226 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Creating instance directory: /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.227 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Creating disk.info with the contents: {'/var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk': 'qcow2', '/var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.228 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.229 2 DEBUG nova.objects.instance [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e2235ad2-cf92-464b-b586-698378cef322 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.736 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.739 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.740 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.811 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.812 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.812 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.813 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.817 2 DEBUG oslo_utils.imageutils.format_inspector [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.818 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.883 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.884 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.918 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.919 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.107s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.919 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.981 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.983 2 DEBUG nova.virt.disk.api [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:42:22 compute-0 nova_compute[117413]: 2025-10-08 16:42:22.983 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.042 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.044 2 DEBUG nova.virt.disk.api [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.044 2 DEBUG nova.objects.instance [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid e2235ad2-cf92-464b-b586-698378cef322 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:42:23 compute-0 podman[152833]: 2025-10-08 16:42:23.464601595 +0000 UTC m=+0.064769292 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.552 2 DEBUG nova.objects.base [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<e2235ad2-cf92-464b-b586-698378cef322> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.553 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.595 2 DEBUG oslo_concurrency.processutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322/disk.config 497664" returned: 0 in 0.042s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.597 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.599 2 DEBUG nova.virt.libvirt.vif [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1903584993',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1903584',id=29,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:41:17Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-b0e0x51p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:41:17Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=e2235ad2-cf92-464b-b586-698378cef322,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "address": "fa:16:3e:b0:37:0a", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8b9ee268-2c", "ovs_interfaceid": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.600 2 DEBUG nova.network.os_vif_util [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "address": "fa:16:3e:b0:37:0a", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8b9ee268-2c", "ovs_interfaceid": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.601 2 DEBUG nova.network.os_vif_util [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:0a,bridge_name='br-int',has_traffic_filtering=True,id=8b9ee268-2cde-42eb-bebc-3158c7b83c25,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b9ee268-2c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.602 2 DEBUG os_vif [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:0a,bridge_name='br-int',has_traffic_filtering=True,id=8b9ee268-2cde-42eb-bebc-3158c7b83c25,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b9ee268-2c') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.606 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '87ba6f63-cba7-5731-9ce2-dd457a4be273', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.613 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b9ee268-2c, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap8b9ee268-2c, col_values=(('qos', UUID('e9af0304-af98-4d0c-ab61-b15679ac7a2f')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.615 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap8b9ee268-2c, col_values=(('external_ids', {'iface-id': '8b9ee268-2cde-42eb-bebc-3158c7b83c25', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:37:0a', 'vm-uuid': 'e2235ad2-cf92-464b-b586-698378cef322'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 NetworkManager[1034]: <info>  [1759941743.6177] manager: (tap8b9ee268-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.628 2 INFO os_vif [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:37:0a,bridge_name='br-int',has_traffic_filtering=True,id=8b9ee268-2cde-42eb-bebc-3158c7b83c25,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b9ee268-2c')
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.628 2 DEBUG nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.628 2 DEBUG nova.compute.manager [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq9kplm9u',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e2235ad2-cf92-464b-b586-698378cef322',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.629 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:23 compute-0 nova_compute[117413]: 2025-10-08 16:42:23.952 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:25 compute-0 nova_compute[117413]: 2025-10-08 16:42:25.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:26 compute-0 nova_compute[117413]: 2025-10-08 16:42:26.595 2 DEBUG nova.network.neutron [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Port 8b9ee268-2cde-42eb-bebc-3158c7b83c25 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:42:26 compute-0 nova_compute[117413]: 2025-10-08 16:42:26.609 2 DEBUG nova.compute.manager [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq9kplm9u',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='e2235ad2-cf92-464b-b586-698378cef322',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:42:26 compute-0 nova_compute[117413]: 2025-10-08 16:42:26.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:27 compute-0 nova_compute[117413]: 2025-10-08 16:42:27.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:28 compute-0 podman[152858]: 2025-10-08 16:42:28.454800707 +0000 UTC m=+0.060269133 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:42:28 compute-0 podman[152859]: 2025-10-08 16:42:28.509825658 +0000 UTC m=+0.107902892 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest)
Oct 08 16:42:28 compute-0 nova_compute[117413]: 2025-10-08 16:42:28.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:29 compute-0 kernel: tap8b9ee268-2c: entered promiscuous mode
Oct 08 16:42:29 compute-0 NetworkManager[1034]: <info>  [1759941749.2393] manager: (tap8b9ee268-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 08 16:42:29 compute-0 ovn_controller[19768]: 2025-10-08T16:42:29Z|00249|binding|INFO|Claiming lport 8b9ee268-2cde-42eb-bebc-3158c7b83c25 for this additional chassis.
Oct 08 16:42:29 compute-0 ovn_controller[19768]: 2025-10-08T16:42:29Z|00250|binding|INFO|8b9ee268-2cde-42eb-bebc-3158c7b83c25: Claiming fa:16:3e:b0:37:0a 10.100.0.8
Oct 08 16:42:29 compute-0 nova_compute[117413]: 2025-10-08 16:42:29.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.267 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:37:0a 10.100.0.8'], port_security=['fa:16:3e:b0:37:0a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e2235ad2-cf92-464b-b586-698378cef322', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '10', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=8b9ee268-2cde-42eb-bebc-3158c7b83c25) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.268 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 8b9ee268-2cde-42eb-bebc-3158c7b83c25 in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:42:29 compute-0 ovn_controller[19768]: 2025-10-08T16:42:29Z|00251|binding|INFO|Setting lport 8b9ee268-2cde-42eb-bebc-3158c7b83c25 ovn-installed in OVS
Oct 08 16:42:29 compute-0 nova_compute[117413]: 2025-10-08 16:42:29.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.270 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:42:29 compute-0 nova_compute[117413]: 2025-10-08 16:42:29.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:29 compute-0 systemd-udevd[152922]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.295 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7b35db00-8c3d-4ef6-86d6-c1670bc71f8e]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 systemd-machined[77548]: New machine qemu-22-instance-0000001d.
Oct 08 16:42:29 compute-0 NetworkManager[1034]: <info>  [1759941749.3016] device (tap8b9ee268-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:42:29 compute-0 NetworkManager[1034]: <info>  [1759941749.3027] device (tap8b9ee268-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:42:29 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-0000001d.
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.338 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[68ae2c9e-bc52-410f-a366-982c0804ee9e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.341 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0f0524-0490-4b90-aadd-0db178275302]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.378 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[31c5b066-2da4-4b14-ae36-a44eea6085e2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.407 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5a26b8-2233-48e1-a319-563e047b4d37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300931, 'reachable_time': 36952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152936, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.434 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b45fec-68af-4ead-b536-08846e8aacc2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 300945, 'tstamp': 300945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152938, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 300948, 'tstamp': 300948}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152938, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.436 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:29 compute-0 nova_compute[117413]: 2025-10-08 16:42:29.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:29 compute-0 nova_compute[117413]: 2025-10-08 16:42:29.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.442 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2742327d-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.442 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.442 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2742327d-20, col_values=(('external_ids', {'iface-id': '8a2f4b59-05b7-414e-b353-f44ee56d820a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.442 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:42:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:29.444 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[838685bc-2a1b-423c-ab9e-ab1c1b0a5c1c]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-2742327d-2338-460e-952e-6446bba2b03f\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 2742327d-2338-460e-952e-6446bba2b03f\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:29 compute-0 podman[127881]: time="2025-10-08T16:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:42:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:42:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3500 "" "Go-http-client/1.1"
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: ERROR   16:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: ERROR   16:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: ERROR   16:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: ERROR   16:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: ERROR   16:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:42:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:42:31 compute-0 nova_compute[117413]: 2025-10-08 16:42:31.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:32 compute-0 ovn_controller[19768]: 2025-10-08T16:42:32Z|00252|binding|INFO|Claiming lport 8b9ee268-2cde-42eb-bebc-3158c7b83c25 for this chassis.
Oct 08 16:42:32 compute-0 ovn_controller[19768]: 2025-10-08T16:42:32Z|00253|binding|INFO|8b9ee268-2cde-42eb-bebc-3158c7b83c25: Claiming fa:16:3e:b0:37:0a 10.100.0.8
Oct 08 16:42:32 compute-0 ovn_controller[19768]: 2025-10-08T16:42:32Z|00254|binding|INFO|Setting lport 8b9ee268-2cde-42eb-bebc-3158c7b83c25 up in Southbound
Oct 08 16:42:33 compute-0 nova_compute[117413]: 2025-10-08 16:42:33.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.028 2 INFO nova.compute.manager [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Post operation of migration started
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.030 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.113 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.114 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.313 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-e2235ad2-cf92-464b-b586-698378cef322" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.314 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-e2235ad2-cf92-464b-b586-698378cef322" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.314 2 DEBUG nova.network.neutron [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:42:34 compute-0 nova_compute[117413]: 2025-10-08 16:42:34.831 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:35 compute-0 nova_compute[117413]: 2025-10-08 16:42:35.303 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:35 compute-0 nova_compute[117413]: 2025-10-08 16:42:35.466 2 DEBUG nova.network.neutron [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Updating instance_info_cache with network_info: [{"id": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "address": "fa:16:3e:b0:37:0a", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b9ee268-2c", "ovs_interfaceid": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:35 compute-0 nova_compute[117413]: 2025-10-08 16:42:35.972 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-e2235ad2-cf92-464b-b586-698378cef322" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:42:36 compute-0 nova_compute[117413]: 2025-10-08 16:42:36.502 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:36 compute-0 nova_compute[117413]: 2025-10-08 16:42:36.502 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:36 compute-0 nova_compute[117413]: 2025-10-08 16:42:36.502 2 DEBUG oslo_concurrency.lockutils [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:36 compute-0 nova_compute[117413]: 2025-10-08 16:42:36.507 2 INFO nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:42:36 compute-0 virtqemud[117740]: Domain id=22 name='instance-0000001d' uuid=e2235ad2-cf92-464b-b586-698378cef322 is tainted: custom-monitor
Oct 08 16:42:36 compute-0 nova_compute[117413]: 2025-10-08 16:42:36.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:37 compute-0 nova_compute[117413]: 2025-10-08 16:42:37.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:42:37 compute-0 nova_compute[117413]: 2025-10-08 16:42:37.517 2 INFO nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:42:38 compute-0 nova_compute[117413]: 2025-10-08 16:42:38.524 2 INFO nova.virt.libvirt.driver [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:42:38 compute-0 nova_compute[117413]: 2025-10-08 16:42:38.531 2 DEBUG nova.compute.manager [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:42:38 compute-0 nova_compute[117413]: 2025-10-08 16:42:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:39 compute-0 nova_compute[117413]: 2025-10-08 16:42:39.040 2 DEBUG nova.objects.instance [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:42:39 compute-0 podman[152961]: 2025-10-08 16:42:39.473086919 +0000 UTC m=+0.082197863 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:42:40 compute-0 nova_compute[117413]: 2025-10-08 16:42:40.066 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:40 compute-0 nova_compute[117413]: 2025-10-08 16:42:40.340 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:40 compute-0 nova_compute[117413]: 2025-10-08 16:42:40.341 2 WARNING neutronclient.v2_0.client [None req-c9f5a933-587e-428b-8c03-b97732ebe1e9 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:41 compute-0 nova_compute[117413]: 2025-10-08 16:42:41.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:41.935 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:41.936 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:41.936 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.260 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "e2235ad2-cf92-464b-b586-698378cef322" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.261 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.262 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "e2235ad2-cf92-464b-b586-698378cef322-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.262 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.262 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.278 2 INFO nova.compute.manager [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Terminating instance
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.798 2 DEBUG nova.compute.manager [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:42:43 compute-0 kernel: tap8b9ee268-2c (unregistering): left promiscuous mode
Oct 08 16:42:43 compute-0 NetworkManager[1034]: <info>  [1759941763.8216] device (tap8b9ee268-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:43 compute-0 ovn_controller[19768]: 2025-10-08T16:42:43Z|00255|binding|INFO|Releasing lport 8b9ee268-2cde-42eb-bebc-3158c7b83c25 from this chassis (sb_readonly=0)
Oct 08 16:42:43 compute-0 ovn_controller[19768]: 2025-10-08T16:42:43Z|00256|binding|INFO|Setting lport 8b9ee268-2cde-42eb-bebc-3158c7b83c25 down in Southbound
Oct 08 16:42:43 compute-0 ovn_controller[19768]: 2025-10-08T16:42:43Z|00257|binding|INFO|Removing iface tap8b9ee268-2c ovn-installed in OVS
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.844 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:37:0a 10.100.0.8'], port_security=['fa:16:3e:b0:37:0a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e2235ad2-cf92-464b-b586-698378cef322', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '15', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=8b9ee268-2cde-42eb-bebc-3158c7b83c25) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.846 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 8b9ee268-2cde-42eb-bebc-3158c7b83c25 in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.848 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2742327d-2338-460e-952e-6446bba2b03f
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.871 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec89c8c-4eb2-4fc8-81a9-441986a75766]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 08 16:42:43 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000001d.scope: Consumed 3.105s CPU time.
Oct 08 16:42:43 compute-0 systemd-machined[77548]: Machine qemu-22-instance-0000001d terminated.
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.901 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[accd4992-19a1-4921-85f0-604507933e6a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.904 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[e388a487-9477-4492-96bc-fa81b81c5e3a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.936 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[01bd9aa5-99e0-40a0-a618-e95070da8f50]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.956 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[93b5b17a-3bce-4494-8217-26a1220e18f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2742327d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:54:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 48, 'tx_packets': 7, 'rx_bytes': 2512, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 48, 'tx_packets': 7, 'rx_bytes': 2512, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300931, 'reachable_time': 36952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 152995, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.976 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3d815063-ac80-4198-9a93-cd1875262efa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 300945, 'tstamp': 300945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152996, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2742327d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 300948, 'tstamp': 300948}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 152996, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.977 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.985 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2742327d-20, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.985 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.985 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2742327d-20, col_values=(('external_ids', {'iface-id': '8a2f4b59-05b7-414e-b353-f44ee56d820a'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.986 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:42:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:43.987 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[c357270f-6560-48f0-83cb-2c33db264e11]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-2742327d-2338-460e-952e-6446bba2b03f\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID 2742327d-2338-460e-952e-6446bba2b03f\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.989 2 DEBUG nova.compute.manager [req-0a8ad7fc-f6e5-431e-b8c3-5690a94c81cc req-7d3f37dd-e6b5-4001-8f35-3287656c0f0d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Received event network-vif-unplugged-8b9ee268-2cde-42eb-bebc-3158c7b83c25 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.989 2 DEBUG oslo_concurrency.lockutils [req-0a8ad7fc-f6e5-431e-b8c3-5690a94c81cc req-7d3f37dd-e6b5-4001-8f35-3287656c0f0d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "e2235ad2-cf92-464b-b586-698378cef322-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.989 2 DEBUG oslo_concurrency.lockutils [req-0a8ad7fc-f6e5-431e-b8c3-5690a94c81cc req-7d3f37dd-e6b5-4001-8f35-3287656c0f0d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.989 2 DEBUG oslo_concurrency.lockutils [req-0a8ad7fc-f6e5-431e-b8c3-5690a94c81cc req-7d3f37dd-e6b5-4001-8f35-3287656c0f0d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.990 2 DEBUG nova.compute.manager [req-0a8ad7fc-f6e5-431e-b8c3-5690a94c81cc req-7d3f37dd-e6b5-4001-8f35-3287656c0f0d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] No waiting events found dispatching network-vif-unplugged-8b9ee268-2cde-42eb-bebc-3158c7b83c25 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:42:43 compute-0 nova_compute[117413]: 2025-10-08 16:42:43.990 2 DEBUG nova.compute.manager [req-0a8ad7fc-f6e5-431e-b8c3-5690a94c81cc req-7d3f37dd-e6b5-4001-8f35-3287656c0f0d c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Received event network-vif-unplugged-8b9ee268-2cde-42eb-bebc-3158c7b83c25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.069 2 INFO nova.virt.libvirt.driver [-] [instance: e2235ad2-cf92-464b-b586-698378cef322] Instance destroyed successfully.
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.070 2 DEBUG nova.objects.instance [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lazy-loading 'resources' on Instance uuid e2235ad2-cf92-464b-b586-698378cef322 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.577 2 DEBUG nova.virt.libvirt.vif [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1903584993',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1903584',id=29,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:41:17Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-b0e0x51p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:42:39Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=e2235ad2-cf92-464b-b586-698378cef322,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "address": "fa:16:3e:b0:37:0a", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b9ee268-2c", "ovs_interfaceid": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.577 2 DEBUG nova.network.os_vif_util [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converting VIF {"id": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "address": "fa:16:3e:b0:37:0a", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b9ee268-2c", "ovs_interfaceid": "8b9ee268-2cde-42eb-bebc-3158c7b83c25", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.578 2 DEBUG nova.network.os_vif_util [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:37:0a,bridge_name='br-int',has_traffic_filtering=True,id=8b9ee268-2cde-42eb-bebc-3158c7b83c25,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b9ee268-2c') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.579 2 DEBUG os_vif [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:37:0a,bridge_name='br-int',has_traffic_filtering=True,id=8b9ee268-2cde-42eb-bebc-3158c7b83c25,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b9ee268-2c') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b9ee268-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=e9af0304-af98-4d0c-ab61-b15679ac7a2f) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.590 2 INFO os_vif [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:37:0a,bridge_name='br-int',has_traffic_filtering=True,id=8b9ee268-2cde-42eb-bebc-3158c7b83c25,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b9ee268-2c')
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.591 2 INFO nova.virt.libvirt.driver [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Deleting instance files /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322_del
Oct 08 16:42:44 compute-0 nova_compute[117413]: 2025-10-08 16:42:44.592 2 INFO nova.virt.libvirt.driver [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Deletion of /var/lib/nova/instances/e2235ad2-cf92-464b-b586-698378cef322_del complete
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.108 2 INFO nova.compute.manager [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.109 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.110 2 DEBUG nova.compute.manager [-] [instance: e2235ad2-cf92-464b-b586-698378cef322] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.110 2 DEBUG nova.network.neutron [-] [instance: e2235ad2-cf92-464b-b586-698378cef322] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.111 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.322 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:45 compute-0 podman[153014]: 2025-10-08 16:42:45.461890751 +0000 UTC m=+0.065145353 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.624 2 DEBUG nova.compute.manager [req-251d59d7-9569-4d02-a39b-87706ca5650b req-80ec510d-a7e9-4454-a983-00ce959bd9ce c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Received event network-vif-deleted-8b9ee268-2cde-42eb-bebc-3158c7b83c25 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.625 2 INFO nova.compute.manager [req-251d59d7-9569-4d02-a39b-87706ca5650b req-80ec510d-a7e9-4454-a983-00ce959bd9ce c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Neutron deleted interface 8b9ee268-2cde-42eb-bebc-3158c7b83c25; detaching it from the instance and deleting it from the info cache
Oct 08 16:42:45 compute-0 nova_compute[117413]: 2025-10-08 16:42:45.625 2 DEBUG nova.network.neutron [req-251d59d7-9569-4d02-a39b-87706ca5650b req-80ec510d-a7e9-4454-a983-00ce959bd9ce c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.048 2 DEBUG nova.compute.manager [req-9af6b9b6-fa16-472d-9e10-31eb51622d01 req-b31b0f45-b5f1-46a5-9e59-4964458ee2d7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Received event network-vif-unplugged-8b9ee268-2cde-42eb-bebc-3158c7b83c25 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.049 2 DEBUG oslo_concurrency.lockutils [req-9af6b9b6-fa16-472d-9e10-31eb51622d01 req-b31b0f45-b5f1-46a5-9e59-4964458ee2d7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "e2235ad2-cf92-464b-b586-698378cef322-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.049 2 DEBUG oslo_concurrency.lockutils [req-9af6b9b6-fa16-472d-9e10-31eb51622d01 req-b31b0f45-b5f1-46a5-9e59-4964458ee2d7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.049 2 DEBUG oslo_concurrency.lockutils [req-9af6b9b6-fa16-472d-9e10-31eb51622d01 req-b31b0f45-b5f1-46a5-9e59-4964458ee2d7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.049 2 DEBUG nova.compute.manager [req-9af6b9b6-fa16-472d-9e10-31eb51622d01 req-b31b0f45-b5f1-46a5-9e59-4964458ee2d7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] No waiting events found dispatching network-vif-unplugged-8b9ee268-2cde-42eb-bebc-3158c7b83c25 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.049 2 DEBUG nova.compute.manager [req-9af6b9b6-fa16-472d-9e10-31eb51622d01 req-b31b0f45-b5f1-46a5-9e59-4964458ee2d7 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Received event network-vif-unplugged-8b9ee268-2cde-42eb-bebc-3158c7b83c25 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.084 2 DEBUG nova.network.neutron [-] [instance: e2235ad2-cf92-464b-b586-698378cef322] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.132 2 DEBUG nova.compute.manager [req-251d59d7-9569-4d02-a39b-87706ca5650b req-80ec510d-a7e9-4454-a983-00ce959bd9ce c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e2235ad2-cf92-464b-b586-698378cef322] Detach interface failed, port_id=8b9ee268-2cde-42eb-bebc-3158c7b83c25, reason: Instance e2235ad2-cf92-464b-b586-698378cef322 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.590 2 INFO nova.compute.manager [-] [instance: e2235ad2-cf92-464b-b586-698378cef322] Took 1.48 seconds to deallocate network for instance.
Oct 08 16:42:46 compute-0 nova_compute[117413]: 2025-10-08 16:42:46.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:47 compute-0 nova_compute[117413]: 2025-10-08 16:42:47.113 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:47 compute-0 nova_compute[117413]: 2025-10-08 16:42:47.114 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:47 compute-0 nova_compute[117413]: 2025-10-08 16:42:47.122 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.008s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:47 compute-0 nova_compute[117413]: 2025-10-08 16:42:47.151 2 INFO nova.scheduler.client.report [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Deleted allocations for instance e2235ad2-cf92-464b-b586-698378cef322
Oct 08 16:42:48 compute-0 nova_compute[117413]: 2025-10-08 16:42:48.179 2 DEBUG oslo_concurrency.lockutils [None req-72a3ad7f-35d5-4577-95c7-161768a89803 aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e2235ad2-cf92-464b-b586-698378cef322" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.917s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.035 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "e1327151-bae2-40dd-a12d-90799e91c86d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.036 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.036 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.036 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.037 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.050 2 INFO nova.compute.manager [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Terminating instance
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.565 2 DEBUG nova.compute.manager [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 kernel: tap83237a56-70 (unregistering): left promiscuous mode
Oct 08 16:42:49 compute-0 NetworkManager[1034]: <info>  [1759941769.5964] device (tap83237a56-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 ovn_controller[19768]: 2025-10-08T16:42:49Z|00258|binding|INFO|Releasing lport 83237a56-7017-4670-921f-7758d20aaae4 from this chassis (sb_readonly=0)
Oct 08 16:42:49 compute-0 ovn_controller[19768]: 2025-10-08T16:42:49Z|00259|binding|INFO|Setting lport 83237a56-7017-4670-921f-7758d20aaae4 down in Southbound
Oct 08 16:42:49 compute-0 ovn_controller[19768]: 2025-10-08T16:42:49Z|00260|binding|INFO|Removing iface tap83237a56-70 ovn-installed in OVS
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.611 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:cd:c4 10.100.0.13'], port_security=['fa:16:3e:3d:cd:c4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e1327151-bae2-40dd-a12d-90799e91c86d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2742327d-2338-460e-952e-6446bba2b03f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a137143db2f84a9f89a9bfc5d20558d0', 'neutron:revision_number': '15', 'neutron:security_group_ids': '90e43186-0cc8-407e-ae2a-ee1daa701739', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d14e964-e6b6-456f-bb61-aa39c15f6e3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=83237a56-7017-4670-921f-7758d20aaae4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.612 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 83237a56-7017-4670-921f-7758d20aaae4 in datapath 2742327d-2338-460e-952e-6446bba2b03f unbound from our chassis
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.613 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2742327d-2338-460e-952e-6446bba2b03f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.614 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3939f617-3287-49b3-b385-659d7d4389bb]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.614 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2742327d-2338-460e-952e-6446bba2b03f namespace which is not needed anymore
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 08 16:42:49 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000001c.scope: Consumed 3.677s CPU time.
Oct 08 16:42:49 compute-0 systemd-machined[77548]: Machine qemu-21-instance-0000001c terminated.
Oct 08 16:42:49 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [NOTICE]   (152723) : haproxy version is 3.0.5-8e879a5
Oct 08 16:42:49 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [NOTICE]   (152723) : path to executable is /usr/sbin/haproxy
Oct 08 16:42:49 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [WARNING]  (152723) : Exiting Master process...
Oct 08 16:42:49 compute-0 podman[153060]: 2025-10-08 16:42:49.737932969 +0000 UTC m=+0.031209178 container kill ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:42:49 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [ALERT]    (152723) : Current worker (152725) exited with code 143 (Terminated)
Oct 08 16:42:49 compute-0 neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f[152719]: [WARNING]  (152723) : All workers exited. Exiting... (0)
Oct 08 16:42:49 compute-0 systemd[1]: libpod-ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99.scope: Deactivated successfully.
Oct 08 16:42:49 compute-0 podman[153075]: 2025-10-08 16:42:49.786456374 +0000 UTC m=+0.027054199 container died ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 16:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99-userdata-shm.mount: Deactivated successfully.
Oct 08 16:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e554d77452114b2da6e77d40926f8f6cec812c2f8046e87a3d5217df29e8a2b4-merged.mount: Deactivated successfully.
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.835 2 INFO nova.virt.libvirt.driver [-] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Instance destroyed successfully.
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.836 2 DEBUG nova.objects.instance [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lazy-loading 'resources' on Instance uuid e1327151-bae2-40dd-a12d-90799e91c86d obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:42:49 compute-0 podman[153075]: 2025-10-08 16:42:49.839490028 +0000 UTC m=+0.080087833 container cleanup ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:42:49 compute-0 systemd[1]: libpod-conmon-ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99.scope: Deactivated successfully.
Oct 08 16:42:49 compute-0 podman[153077]: 2025-10-08 16:42:49.858517815 +0000 UTC m=+0.090462701 container remove ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.876 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[af67d065-7fe5-49b2-acc0-c30d679ade13]: (4, ("Wed Oct  8 04:42:49 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f (ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99)\nec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99\nWed Oct  8 04:42:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2742327d-2338-460e-952e-6446bba2b03f (ec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99)\nec181658c253e8e9ea9f4e82766b9a98491b4076d1c11fb73045ccfebf918b99\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.877 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[824e6b11-a0a4-4417-bf37-9e93b335917b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.878 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2742327d-2338-460e-952e-6446bba2b03f.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.878 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[71cc56a2-7f78-419b-8dad-125912f22124]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.879 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2742327d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 kernel: tap2742327d-20: left promiscuous mode
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 nova_compute[117413]: 2025-10-08 16:42:49.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.898 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a74450dc-9d85-4e38-b530-150254670453]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.926 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b51a5d-b07f-4329-8ad0-a8479cdb8970]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.927 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ad60baae-a756-405e-92cb-36a4ab4e13a4]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.944 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3f323c-c09d-43aa-b4e3-461ef8e2753a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 300923, 'reachable_time': 29339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153129, 'error': None, 'target': 'ovnmeta-2742327d-2338-460e-952e-6446bba2b03f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.946 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2742327d-2338-460e-952e-6446bba2b03f deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:42:49 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:49.946 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[fb750f5f-67e8-4171-8351-be61209e4e41]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:42:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d2742327d\x2d2338\x2d460e\x2d952e\x2d6446bba2b03f.mount: Deactivated successfully.
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.342 2 DEBUG nova.virt.libvirt.vif [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:40:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadStabilizationStrategy-server-1914151859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadstabilizationstrategy-server-1914151',id=28,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:40:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a137143db2f84a9f89a9bfc5d20558d0',ramdisk_id='',reservation_id='r-a1uf3mhk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354',owner_user_name='tempest-TestExecuteWorkloadStabilizationStrategy-817748354-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:42:10Z,user_data=None,user_id='aa1d2ca0056143d982599cc1b9f8587d',uuid=e1327151-bae2-40dd-a12d-90799e91c86d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83237a56-7017-4670-921f-7758d20aaae4", "address": "fa:16:3e:3d:cd:c4", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83237a56-70", "ovs_interfaceid": "83237a56-7017-4670-921f-7758d20aaae4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.342 2 DEBUG nova.network.os_vif_util [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converting VIF {"id": "83237a56-7017-4670-921f-7758d20aaae4", "address": "fa:16:3e:3d:cd:c4", "network": {"id": "2742327d-2338-460e-952e-6446bba2b03f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadStabilizationStrategy-119308643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a424ccdfcc0c4fc2ac4c67ed7d4c2afe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83237a56-70", "ovs_interfaceid": "83237a56-7017-4670-921f-7758d20aaae4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.344 2 DEBUG nova.network.os_vif_util [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:cd:c4,bridge_name='br-int',has_traffic_filtering=True,id=83237a56-7017-4670-921f-7758d20aaae4,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83237a56-70') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.344 2 DEBUG os_vif [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:cd:c4,bridge_name='br-int',has_traffic_filtering=True,id=83237a56-7017-4670-921f-7758d20aaae4,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83237a56-70') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.346 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83237a56-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=50d1a112-f0fd-41b0-b317-a0af9c7d7432) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.356 2 INFO os_vif [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:cd:c4,bridge_name='br-int',has_traffic_filtering=True,id=83237a56-7017-4670-921f-7758d20aaae4,network=Network(2742327d-2338-460e-952e-6446bba2b03f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83237a56-70')
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.357 2 INFO nova.virt.libvirt.driver [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Deleting instance files /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d_del
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.357 2 INFO nova.virt.libvirt.driver [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Deletion of /var/lib/nova/instances/e1327151-bae2-40dd-a12d-90799e91c86d_del complete
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.375 2 DEBUG nova.compute.manager [req-3847288c-289f-4a5d-b21f-afe1a4501650 req-ce235fe5-8f61-42a7-9adf-2e91c2edff2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Received event network-vif-unplugged-83237a56-7017-4670-921f-7758d20aaae4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.375 2 DEBUG oslo_concurrency.lockutils [req-3847288c-289f-4a5d-b21f-afe1a4501650 req-ce235fe5-8f61-42a7-9adf-2e91c2edff2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.376 2 DEBUG oslo_concurrency.lockutils [req-3847288c-289f-4a5d-b21f-afe1a4501650 req-ce235fe5-8f61-42a7-9adf-2e91c2edff2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.376 2 DEBUG oslo_concurrency.lockutils [req-3847288c-289f-4a5d-b21f-afe1a4501650 req-ce235fe5-8f61-42a7-9adf-2e91c2edff2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.376 2 DEBUG nova.compute.manager [req-3847288c-289f-4a5d-b21f-afe1a4501650 req-ce235fe5-8f61-42a7-9adf-2e91c2edff2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] No waiting events found dispatching network-vif-unplugged-83237a56-7017-4670-921f-7758d20aaae4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.376 2 DEBUG nova.compute.manager [req-3847288c-289f-4a5d-b21f-afe1a4501650 req-ce235fe5-8f61-42a7-9adf-2e91c2edff2f c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Received event network-vif-unplugged-83237a56-7017-4670-921f-7758d20aaae4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.868 2 INFO nova.compute.manager [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Took 1.30 seconds to destroy the instance on the hypervisor.
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.869 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.869 2 DEBUG nova.compute.manager [-] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.869 2 DEBUG nova.network.neutron [-] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:42:50 compute-0 nova_compute[117413]: 2025-10-08 16:42:50.870 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:51 compute-0 nova_compute[117413]: 2025-10-08 16:42:51.340 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:42:51 compute-0 nova_compute[117413]: 2025-10-08 16:42:51.698 2 DEBUG nova.compute.manager [req-fe999919-360c-483d-835b-15d68a52cf5b req-080699fe-e9c6-4ae9-aaa2-26afc3e323d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Received event network-vif-deleted-83237a56-7017-4670-921f-7758d20aaae4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:42:51 compute-0 nova_compute[117413]: 2025-10-08 16:42:51.698 2 INFO nova.compute.manager [req-fe999919-360c-483d-835b-15d68a52cf5b req-080699fe-e9c6-4ae9-aaa2-26afc3e323d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Neutron deleted interface 83237a56-7017-4670-921f-7758d20aaae4; detaching it from the instance and deleting it from the info cache
Oct 08 16:42:51 compute-0 nova_compute[117413]: 2025-10-08 16:42:51.699 2 DEBUG nova.network.neutron [req-fe999919-360c-483d-835b-15d68a52cf5b req-080699fe-e9c6-4ae9-aaa2-26afc3e323d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:51 compute-0 nova_compute[117413]: 2025-10-08 16:42:51.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.152 2 DEBUG nova.network.neutron [-] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.206 2 DEBUG nova.compute.manager [req-fe999919-360c-483d-835b-15d68a52cf5b req-080699fe-e9c6-4ae9-aaa2-26afc3e323d5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Detach interface failed, port_id=83237a56-7017-4670-921f-7758d20aaae4, reason: Instance e1327151-bae2-40dd-a12d-90799e91c86d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.432 2 DEBUG nova.compute.manager [req-dbf95f5e-629f-4be8-934c-398c76f850ca req-c511a388-3eed-4d33-aa99-8398b87b4ef0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Received event network-vif-unplugged-83237a56-7017-4670-921f-7758d20aaae4 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.433 2 DEBUG oslo_concurrency.lockutils [req-dbf95f5e-629f-4be8-934c-398c76f850ca req-c511a388-3eed-4d33-aa99-8398b87b4ef0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.433 2 DEBUG oslo_concurrency.lockutils [req-dbf95f5e-629f-4be8-934c-398c76f850ca req-c511a388-3eed-4d33-aa99-8398b87b4ef0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.433 2 DEBUG oslo_concurrency.lockutils [req-dbf95f5e-629f-4be8-934c-398c76f850ca req-c511a388-3eed-4d33-aa99-8398b87b4ef0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.433 2 DEBUG nova.compute.manager [req-dbf95f5e-629f-4be8-934c-398c76f850ca req-c511a388-3eed-4d33-aa99-8398b87b4ef0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] No waiting events found dispatching network-vif-unplugged-83237a56-7017-4670-921f-7758d20aaae4 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.433 2 DEBUG nova.compute.manager [req-dbf95f5e-629f-4be8-934c-398c76f850ca req-c511a388-3eed-4d33-aa99-8398b87b4ef0 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Received event network-vif-unplugged-83237a56-7017-4670-921f-7758d20aaae4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:42:52 compute-0 podman[153130]: 2025-10-08 16:42:52.448210579 +0000 UTC m=+0.055764883 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 08 16:42:52 compute-0 nova_compute[117413]: 2025-10-08 16:42:52.658 2 INFO nova.compute.manager [-] [instance: e1327151-bae2-40dd-a12d-90799e91c86d] Took 1.79 seconds to deallocate network for instance.
Oct 08 16:42:53 compute-0 nova_compute[117413]: 2025-10-08 16:42:53.178 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:42:53 compute-0 nova_compute[117413]: 2025-10-08 16:42:53.179 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:42:53 compute-0 nova_compute[117413]: 2025-10-08 16:42:53.230 2 DEBUG nova.compute.provider_tree [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:42:53 compute-0 nova_compute[117413]: 2025-10-08 16:42:53.737 2 DEBUG nova.scheduler.client.report [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:42:54 compute-0 nova_compute[117413]: 2025-10-08 16:42:54.247 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.068s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:54 compute-0 nova_compute[117413]: 2025-10-08 16:42:54.274 2 INFO nova.scheduler.client.report [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Deleted allocations for instance e1327151-bae2-40dd-a12d-90799e91c86d
Oct 08 16:42:54 compute-0 podman[153151]: 2025-10-08 16:42:54.465992847 +0000 UTC m=+0.071515706 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:42:55 compute-0 nova_compute[117413]: 2025-10-08 16:42:55.302 2 DEBUG oslo_concurrency.lockutils [None req-9e9a43ca-8563-464a-91f4-e77e03fa25ab aa1d2ca0056143d982599cc1b9f8587d a137143db2f84a9f89a9bfc5d20558d0 - - default default] Lock "e1327151-bae2-40dd-a12d-90799e91c86d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.266s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:42:55 compute-0 nova_compute[117413]: 2025-10-08 16:42:55.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:56 compute-0 nova_compute[117413]: 2025-10-08 16:42:56.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:59.311 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:42:59 compute-0 nova_compute[117413]: 2025-10-08 16:42:59.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:42:59 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:42:59.312 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:42:59 compute-0 podman[153171]: 2025-10-08 16:42:59.47148586 +0000 UTC m=+0.070599160 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:42:59 compute-0 podman[153172]: 2025-10-08 16:42:59.53203641 +0000 UTC m=+0.116653063 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007)
Oct 08 16:42:59 compute-0 podman[127881]: time="2025-10-08T16:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:42:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:42:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 08 16:43:00 compute-0 nova_compute[117413]: 2025-10-08 16:43:00.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: ERROR   16:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: ERROR   16:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: ERROR   16:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: ERROR   16:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: ERROR   16:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:43:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:43:01 compute-0 nova_compute[117413]: 2025-10-08 16:43:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:01 compute-0 nova_compute[117413]: 2025-10-08 16:43:01.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:04 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:04.314 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:43:05 compute-0 nova_compute[117413]: 2025-10-08 16:43:05.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:06 compute-0 nova_compute[117413]: 2025-10-08 16:43:06.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:10 compute-0 nova_compute[117413]: 2025-10-08 16:43:10.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:10 compute-0 podman[153219]: 2025-10-08 16:43:10.491103281 +0000 UTC m=+0.084620543 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 16:43:11 compute-0 nova_compute[117413]: 2025-10-08 16:43:11.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:12.102 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a0:2e 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e4f7c384fd8a490f85ee6827269829c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a63b94a5-36df-4884-a4f9-6965418ea72c) old=Port_Binding(mac=['fa:16:3e:1a:a0:2e'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e4f7c384fd8a490f85ee6827269829c0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:43:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:12.103 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a63b94a5-36df-4884-a4f9-6965418ea72c in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 updated
Oct 08 16:43:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:12.104 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:43:12 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:12.105 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[acdb1612-9eda-4d98-96d5-01b4a9904117]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:43:14 compute-0 nova_compute[117413]: 2025-10-08 16:43:14.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:14 compute-0 nova_compute[117413]: 2025-10-08 16:43:14.881 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:14 compute-0 nova_compute[117413]: 2025-10-08 16:43:14.883 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:14 compute-0 nova_compute[117413]: 2025-10-08 16:43:14.883 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:14 compute-0 nova_compute[117413]: 2025-10-08 16:43:14.884 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.042 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.043 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.069 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.070 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6184MB free_disk=73.24961471557617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.070 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.070 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:15 compute-0 nova_compute[117413]: 2025-10-08 16:43:15.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:16 compute-0 nova_compute[117413]: 2025-10-08 16:43:16.144 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:43:16 compute-0 nova_compute[117413]: 2025-10-08 16:43:16.145 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:43:15 up 51 min,  0 user,  load average: 0.10, 0.14, 0.17\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:43:16 compute-0 nova_compute[117413]: 2025-10-08 16:43:16.187 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:43:16 compute-0 podman[153242]: 2025-10-08 16:43:16.489269751 +0000 UTC m=+0.090834321 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Oct 08 16:43:16 compute-0 nova_compute[117413]: 2025-10-08 16:43:16.697 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:43:16 compute-0 nova_compute[117413]: 2025-10-08 16:43:16.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:17 compute-0 nova_compute[117413]: 2025-10-08 16:43:17.208 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:43:17 compute-0 nova_compute[117413]: 2025-10-08 16:43:17.209 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.139s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:18 compute-0 nova_compute[117413]: 2025-10-08 16:43:18.210 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:18 compute-0 nova_compute[117413]: 2025-10-08 16:43:18.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:20 compute-0 nova_compute[117413]: 2025-10-08 16:43:20.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:20 compute-0 nova_compute[117413]: 2025-10-08 16:43:20.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:21 compute-0 nova_compute[117413]: 2025-10-08 16:43:21.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:21 compute-0 nova_compute[117413]: 2025-10-08 16:43:21.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:21 compute-0 nova_compute[117413]: 2025-10-08 16:43:21.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:43:21 compute-0 nova_compute[117413]: 2025-10-08 16:43:21.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:22.621 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:4b:d6 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-572580d0-7986-4d68-816c-b279ad3ddd38', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-572580d0-7986-4d68-816c-b279ad3ddd38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3dafd92a-51ef-4af0-b296-9feab004ad4a, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=e014ce00-0d98-4e05-9ab8-776d531d482f) old=Port_Binding(mac=['fa:16:3e:a8:4b:d6'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-572580d0-7986-4d68-816c-b279ad3ddd38', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-572580d0-7986-4d68-816c-b279ad3ddd38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:43:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:22.622 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port e014ce00-0d98-4e05-9ab8-776d531d482f in datapath 572580d0-7986-4d68-816c-b279ad3ddd38 updated
Oct 08 16:43:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:22.623 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 572580d0-7986-4d68-816c-b279ad3ddd38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:43:22 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:22.624 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[2f29f86c-0f63-45ab-ac88-b62bce2e21ab]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:43:23 compute-0 podman[153265]: 2025-10-08 16:43:23.473244862 +0000 UTC m=+0.073049670 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:43:25 compute-0 nova_compute[117413]: 2025-10-08 16:43:25.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:25 compute-0 nova_compute[117413]: 2025-10-08 16:43:25.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:25 compute-0 podman[153286]: 2025-10-08 16:43:25.468924457 +0000 UTC m=+0.069410485 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:43:26 compute-0 nova_compute[117413]: 2025-10-08 16:43:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:29 compute-0 nova_compute[117413]: 2025-10-08 16:43:29.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:29 compute-0 podman[127881]: time="2025-10-08T16:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:43:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:43:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3032 "" "Go-http-client/1.1"
Oct 08 16:43:30 compute-0 nova_compute[117413]: 2025-10-08 16:43:30.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:30 compute-0 podman[153305]: 2025-10-08 16:43:30.468990032 +0000 UTC m=+0.071746882 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:43:30 compute-0 podman[153306]: 2025-10-08 16:43:30.513655595 +0000 UTC m=+0.109806616 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: ERROR   16:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: ERROR   16:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: ERROR   16:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: ERROR   16:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: ERROR   16:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:43:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:43:31 compute-0 nova_compute[117413]: 2025-10-08 16:43:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:35 compute-0 nova_compute[117413]: 2025-10-08 16:43:35.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:36 compute-0 ovn_controller[19768]: 2025-10-08T16:43:36Z|00261|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 08 16:43:36 compute-0 nova_compute[117413]: 2025-10-08 16:43:36.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:40 compute-0 nova_compute[117413]: 2025-10-08 16:43:40.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:41 compute-0 podman[153353]: 2025-10-08 16:43:41.455814722 +0000 UTC m=+0.058712298 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, io.buildah.version=1.41.4, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:43:41 compute-0 nova_compute[117413]: 2025-10-08 16:43:41.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:41.937 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:41.938 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:43:41.938 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.363 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.364 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.364 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.365 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.365 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:43 compute-0 nova_compute[117413]: 2025-10-08 16:43:43.366 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.381 2 DEBUG nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.381 2 WARNING nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Unknown base file: /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.381 2 INFO nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Removable base files: /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.382 2 INFO nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.382 2 DEBUG nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.382 2 DEBUG nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 08 16:43:44 compute-0 nova_compute[117413]: 2025-10-08 16:43:44.382 2 DEBUG nova.virt.libvirt.imagecache [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.12/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 08 16:43:45 compute-0 nova_compute[117413]: 2025-10-08 16:43:45.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:46 compute-0 nova_compute[117413]: 2025-10-08 16:43:46.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:47 compute-0 podman[153374]: 2025-10-08 16:43:47.462425885 +0000 UTC m=+0.065903224 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 08 16:43:49 compute-0 nova_compute[117413]: 2025-10-08 16:43:49.392 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:49 compute-0 nova_compute[117413]: 2025-10-08 16:43:49.393 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:49 compute-0 nova_compute[117413]: 2025-10-08 16:43:49.898 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:43:50 compute-0 nova_compute[117413]: 2025-10-08 16:43:50.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:50 compute-0 nova_compute[117413]: 2025-10-08 16:43:50.444 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:50 compute-0 nova_compute[117413]: 2025-10-08 16:43:50.444 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:50 compute-0 nova_compute[117413]: 2025-10-08 16:43:50.454 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:43:50 compute-0 nova_compute[117413]: 2025-10-08 16:43:50.454 2 INFO nova.compute.claims [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:43:51 compute-0 nova_compute[117413]: 2025-10-08 16:43:51.518 2 DEBUG nova.compute.provider_tree [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:43:51 compute-0 nova_compute[117413]: 2025-10-08 16:43:51.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:52 compute-0 nova_compute[117413]: 2025-10-08 16:43:52.025 2 DEBUG nova.scheduler.client.report [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:43:52 compute-0 nova_compute[117413]: 2025-10-08 16:43:52.536 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.091s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:52 compute-0 nova_compute[117413]: 2025-10-08 16:43:52.537 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:43:53 compute-0 nova_compute[117413]: 2025-10-08 16:43:53.051 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:43:53 compute-0 nova_compute[117413]: 2025-10-08 16:43:53.052 2 DEBUG nova.network.neutron [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:43:53 compute-0 nova_compute[117413]: 2025-10-08 16:43:53.052 2 WARNING neutronclient.v2_0.client [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:43:53 compute-0 nova_compute[117413]: 2025-10-08 16:43:53.053 2 WARNING neutronclient.v2_0.client [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:43:53 compute-0 nova_compute[117413]: 2025-10-08 16:43:53.561 2 INFO nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:43:53 compute-0 nova_compute[117413]: 2025-10-08 16:43:53.729 2 DEBUG nova.network.neutron [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Successfully created port: 1fb61428-1e3c-45d8-83a4-4616134641ec _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.069 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:43:54 compute-0 podman[153397]: 2025-10-08 16:43:54.475837293 +0000 UTC m=+0.073347668 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.613 2 DEBUG nova.network.neutron [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Successfully updated port: 1fb61428-1e3c-45d8-83a4-4616134641ec _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.691 2 DEBUG nova.compute.manager [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-changed-1fb61428-1e3c-45d8-83a4-4616134641ec external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.692 2 DEBUG nova.compute.manager [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Refreshing instance network info cache due to event network-changed-1fb61428-1e3c-45d8-83a4-4616134641ec. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.692 2 DEBUG oslo_concurrency.lockutils [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-42a86e16-7b85-41fe-be3c-61a97043d11c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.692 2 DEBUG oslo_concurrency.lockutils [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-42a86e16-7b85-41fe-be3c-61a97043d11c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:43:54 compute-0 nova_compute[117413]: 2025-10-08 16:43:54.693 2 DEBUG nova.network.neutron [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Refreshing network info cache for port 1fb61428-1e3c-45d8-83a4-4616134641ec _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.088 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.090 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.090 2 INFO nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Creating image(s)
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.091 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "/var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.091 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "/var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.092 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "/var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.093 2 DEBUG oslo_utils.imageutils.format_inspector [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.096 2 DEBUG oslo_utils.imageutils.format_inspector [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.098 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.119 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "refresh_cache-42a86e16-7b85-41fe-be3c-61a97043d11c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.185 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.186 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.186 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.187 2 DEBUG oslo_utils.imageutils.format_inspector [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.190 2 DEBUG oslo_utils.imageutils.format_inspector [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.190 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.201 2 WARNING neutronclient.v2_0.client [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.280 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.281 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.330 2 DEBUG nova.network.neutron [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.341 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk 1073741824" returned: 0 in 0.060s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.342 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.156s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.343 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.441 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.442 2 DEBUG nova.virt.disk.api [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Checking if we can resize image /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.443 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.476 2 DEBUG nova.network.neutron [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.522 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.523 2 DEBUG nova.virt.disk.api [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Cannot resize image /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.524 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.525 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Ensure instance console log exists: /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.526 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.527 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.527 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.985 2 DEBUG oslo_concurrency.lockutils [req-9dc32778-d77e-48af-ac49-7a1b795b8bd5 req-5d8907b3-b0ba-4803-841f-f7e6533297e9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-42a86e16-7b85-41fe-be3c-61a97043d11c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.987 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquired lock "refresh_cache-42a86e16-7b85-41fe-be3c-61a97043d11c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:43:55 compute-0 nova_compute[117413]: 2025-10-08 16:43:55.987 2 DEBUG nova.network.neutron [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:43:56 compute-0 podman[153433]: 2025-10-08 16:43:56.495530003 +0000 UTC m=+0.093750055 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 08 16:43:56 compute-0 nova_compute[117413]: 2025-10-08 16:43:56.628 2 DEBUG nova.network.neutron [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:43:56 compute-0 nova_compute[117413]: 2025-10-08 16:43:56.814 2 WARNING neutronclient.v2_0.client [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:43:56 compute-0 nova_compute[117413]: 2025-10-08 16:43:56.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:56 compute-0 nova_compute[117413]: 2025-10-08 16:43:56.993 2 DEBUG nova.network.neutron [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Updating instance_info_cache with network_info: [{"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.501 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Releasing lock "refresh_cache-42a86e16-7b85-41fe-be3c-61a97043d11c" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.502 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Instance network_info: |[{"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.504 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Start _get_guest_xml network_info=[{"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.509 2 WARNING nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.511 2 DEBUG nova.virt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategy-server-229913144', uuid='42a86e16-7b85-41fe-be3c-61a97043d11c'), owner=OwnerMeta(userid='7560d8247c7549c9a1a5774b411e593f', username='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin', projectid='cc10ca4f587446c896aeb3ac8d6a1fea', projectname='tempest-TestExecuteZoneMigrationStrategy-1978933030'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": 
"1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759941837.5109951) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.517 2 DEBUG nova.virt.libvirt.host [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.518 2 DEBUG nova.virt.libvirt.host [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.521 2 DEBUG nova.virt.libvirt.host [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.522 2 DEBUG nova.virt.libvirt.host [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.523 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.523 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.524 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.524 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.524 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.524 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.525 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.525 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.525 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.526 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.526 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.526 2 DEBUG nova.virt.hardware [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.532 2 DEBUG nova.virt.libvirt.vif [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:43:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-229913144',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-229913144',id=31,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-z66gwgih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZon
eMigrationStrategy-1978933030-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:43:54Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=42a86e16-7b85-41fe-be3c-61a97043d11c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.532 2 DEBUG nova.network.os_vif_util [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.533 2 DEBUG nova.network.os_vif_util [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:43:57 compute-0 nova_compute[117413]: 2025-10-08 16:43:57.534 2 DEBUG nova.objects.instance [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lazy-loading 'pci_devices' on Instance uuid 42a86e16-7b85-41fe-be3c-61a97043d11c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.043 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <uuid>42a86e16-7b85-41fe-be3c-61a97043d11c</uuid>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <name>instance-0000001f</name>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-229913144</nova:name>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:43:57</nova:creationTime>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:43:58 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:43:58 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:user uuid="7560d8247c7549c9a1a5774b411e593f">tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin</nova:user>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:project uuid="cc10ca4f587446c896aeb3ac8d6a1fea">tempest-TestExecuteZoneMigrationStrategy-1978933030</nova:project>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         <nova:port uuid="1fb61428-1e3c-45d8-83a4-4616134641ec">
Oct 08 16:43:58 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <system>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <entry name="serial">42a86e16-7b85-41fe-be3c-61a97043d11c</entry>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <entry name="uuid">42a86e16-7b85-41fe-be3c-61a97043d11c</entry>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </system>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <os>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </os>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <features>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </features>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.config"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:8d:5a:bb"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <target dev="tap1fb61428-1e"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/console.log" append="off"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <video>
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </video>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:43:58 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:43:58 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:43:58 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:43:58 compute-0 nova_compute[117413]: </domain>
Oct 08 16:43:58 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.045 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Preparing to wait for external event network-vif-plugged-1fb61428-1e3c-45d8-83a4-4616134641ec prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.046 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.046 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.046 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.047 2 DEBUG nova.virt.libvirt.vif [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:43:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-229913144',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-229913144',id=31,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-z66gwgih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:43:54Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=42a86e16-7b85-41fe-be3c-61a97043d11c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.047 2 DEBUG nova.network.os_vif_util [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.048 2 DEBUG nova.network.os_vif_util [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.048 2 DEBUG os_vif [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': '8530e651-1da9-5888-8b4a-13182abbdc99', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1fb61428-1e, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap1fb61428-1e, col_values=(('qos', UUID('909356e9-1260-4487-81ec-940fd093afdc')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap1fb61428-1e, col_values=(('external_ids', {'iface-id': '1fb61428-1e3c-45d8-83a4-4616134641ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:5a:bb', 'vm-uuid': '42a86e16-7b85-41fe-be3c-61a97043d11c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:43:58 compute-0 NetworkManager[1034]: <info>  [1759941838.0575] manager: (tap1fb61428-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:43:58 compute-0 nova_compute[117413]: 2025-10-08 16:43:58.072 2 INFO os_vif [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e')
Oct 08 16:43:59 compute-0 nova_compute[117413]: 2025-10-08 16:43:59.612 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:43:59 compute-0 nova_compute[117413]: 2025-10-08 16:43:59.612 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:43:59 compute-0 nova_compute[117413]: 2025-10-08 16:43:59.613 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] No VIF found with MAC fa:16:3e:8d:5a:bb, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:43:59 compute-0 nova_compute[117413]: 2025-10-08 16:43:59.613 2 INFO nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Using config drive
Oct 08 16:43:59 compute-0 podman[127881]: time="2025-10-08T16:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:43:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:43:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3032 "" "Go-http-client/1.1"
Oct 08 16:44:00 compute-0 nova_compute[117413]: 2025-10-08 16:44:00.128 2 WARNING neutronclient.v2_0.client [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:00 compute-0 nova_compute[117413]: 2025-10-08 16:44:00.856 2 INFO nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Creating config drive at /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.config
Oct 08 16:44:00 compute-0 nova_compute[117413]: 2025-10-08 16:44:00.866 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpe3db_5ql execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.018 2 DEBUG oslo_concurrency.processutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpe3db_5ql" returned: 0 in 0.151s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:01 compute-0 kernel: tap1fb61428-1e: entered promiscuous mode
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 NetworkManager[1034]: <info>  [1759941841.1391] manager: (tap1fb61428-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 ovn_controller[19768]: 2025-10-08T16:44:01Z|00262|binding|INFO|Claiming lport 1fb61428-1e3c-45d8-83a4-4616134641ec for this chassis.
Oct 08 16:44:01 compute-0 ovn_controller[19768]: 2025-10-08T16:44:01Z|00263|binding|INFO|1fb61428-1e3c-45d8-83a4-4616134641ec: Claiming fa:16:3e:8d:5a:bb 10.100.0.4
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.159 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:5a:bb 10.100.0.4'], port_security=['fa:16:3e:8d:5a:bb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '42a86e16-7b85-41fe-be3c-61a97043d11c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=1fb61428-1e3c-45d8-83a4-4616134641ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.166 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb61428-1e3c-45d8-83a4-4616134641ec in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 bound to our chassis
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.167 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:44:01 compute-0 systemd-machined[77548]: New machine qemu-23-instance-0000001f.
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.183 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[beebaafd-e8d8-4f79-bd9d-01b099a4f068]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.184 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6d7d7c0-f1 in ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.191 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6d7d7c0-f0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.191 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b54089f1-1383-4945-a913-e6e6f981b16b]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.192 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[16b0fb05-f418-4cca-a815-ec00d5f42f14]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.207 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5edca3-44c1-4bb9-95c4-30d5afc8f98f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-0000001f.
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.221 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3049f6-c86d-47f2-8f72-381ee3844843]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 ovn_controller[19768]: 2025-10-08T16:44:01Z|00264|binding|INFO|Setting lport 1fb61428-1e3c-45d8-83a4-4616134641ec ovn-installed in OVS
Oct 08 16:44:01 compute-0 ovn_controller[19768]: 2025-10-08T16:44:01Z|00265|binding|INFO|Setting lport 1fb61428-1e3c-45d8-83a4-4616134641ec up in Southbound
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 podman[153466]: 2025-10-08 16:44:01.237746049 +0000 UTC m=+0.109149985 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 08 16:44:01 compute-0 systemd-udevd[153525]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:44:01 compute-0 NetworkManager[1034]: <info>  [1759941841.2620] device (tap1fb61428-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:44:01 compute-0 NetworkManager[1034]: <info>  [1759941841.2630] device (tap1fb61428-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.263 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[d9391b5a-720e-4de0-acfa-ffce1a19d786]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 podman[153467]: 2025-10-08 16:44:01.268826802 +0000 UTC m=+0.138193599 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.268 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[92676660-e317-4ad0-9ecc-f028485fb7ea]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 NetworkManager[1034]: <info>  [1759941841.2700] manager: (tapc6d7d7c0-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Oct 08 16:44:01 compute-0 systemd-udevd[153534]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.309 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[4dea5cb7-feaa-4b8e-9fa8-5317d8c9a5f9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.311 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3f02a247-e110-4a85-8d99-701961045069]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 NetworkManager[1034]: <info>  [1759941841.3424] device (tapc6d7d7c0-f0): carrier: link connected
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.351 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[26b1625e-3a1d-4510-9246-f9b9a5e5f463]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.372 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4285c820-0af8-469e-a35e-6c61ebd70ece]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313003, 'reachable_time': 36908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153558, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.395 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[525d3975-318a-4233-b896-5bb12d128742]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:a02e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 313003, 'tstamp': 313003}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153559, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.408 2 DEBUG nova.compute.manager [req-ac041bc9-b6f0-47ad-aecd-e7321ef0e1ee req-eacbb81f-2a95-4f73-b065-6e761c56c14e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-plugged-1fb61428-1e3c-45d8-83a4-4616134641ec external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.409 2 DEBUG oslo_concurrency.lockutils [req-ac041bc9-b6f0-47ad-aecd-e7321ef0e1ee req-eacbb81f-2a95-4f73-b065-6e761c56c14e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.409 2 DEBUG oslo_concurrency.lockutils [req-ac041bc9-b6f0-47ad-aecd-e7321ef0e1ee req-eacbb81f-2a95-4f73-b065-6e761c56c14e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.410 2 DEBUG oslo_concurrency.lockutils [req-ac041bc9-b6f0-47ad-aecd-e7321ef0e1ee req-eacbb81f-2a95-4f73-b065-6e761c56c14e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.410 2 DEBUG nova.compute.manager [req-ac041bc9-b6f0-47ad-aecd-e7321ef0e1ee req-eacbb81f-2a95-4f73-b065-6e761c56c14e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Processing event network-vif-plugged-1fb61428-1e3c-45d8-83a4-4616134641ec _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: ERROR   16:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: ERROR   16:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: ERROR   16:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: ERROR   16:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:44:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.427 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f6dbcaf0-a23b-46b6-8877-f743dee4c220]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313003, 'reachable_time': 36908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 153560, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.472 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[064cc290-525f-4803-a02b-be67fb206baf]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.547 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e71e1eb4-58ec-49c6-bd14-bf37b5b0818f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.548 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.548 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.548 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d7d7c0-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:01 compute-0 NetworkManager[1034]: <info>  [1759941841.5515] manager: (tapc6d7d7c0-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct 08 16:44:01 compute-0 kernel: tapc6d7d7c0-f0: entered promiscuous mode
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.554 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6d7d7c0-f0, col_values=(('external_ids', {'iface-id': 'a63b94a5-36df-4884-a4f9-6965418ea72c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 ovn_controller[19768]: 2025-10-08T16:44:01Z|00266|binding|INFO|Releasing lport a63b94a5-36df-4884-a4f9-6965418ea72c from this chassis (sb_readonly=0)
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.557 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d96b597b-5b94-4fd0-aff3-5e0e08c62397]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.558 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.558 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.558 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.558 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.558 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[5f23768b-1b3a-45bc-b8c3-c1f22b73e97e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.559 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.559 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c64c03-e365-4321-ab22-d33954ce440b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.559 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:44:01 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:01.560 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'env', 'PROCESS_TAG=haproxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:01 compute-0 anacron[90088]: Job `cron.weekly' started
Oct 08 16:44:01 compute-0 anacron[90088]: Job `cron.weekly' terminated
Oct 08 16:44:01 compute-0 nova_compute[117413]: 2025-10-08 16:44:01.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:02 compute-0 podman[153601]: 2025-10-08 16:44:02.007237293 +0000 UTC m=+0.075988203 container create 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:44:02 compute-0 systemd[1]: Started libpod-conmon-30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4.scope.
Oct 08 16:44:02 compute-0 podman[153601]: 2025-10-08 16:44:01.97125665 +0000 UTC m=+0.040007610 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:44:02 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be427bddb8601e8d4d587e48f59a146007e859734c10aef24b332c6818bb6eb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:44:02 compute-0 podman[153601]: 2025-10-08 16:44:02.114131342 +0000 UTC m=+0.182882272 container init 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:44:02 compute-0 podman[153601]: 2025-10-08 16:44:02.124677535 +0000 UTC m=+0.193428445 container start 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2)
Oct 08 16:44:02 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [NOTICE]   (153620) : New worker (153622) forked
Oct 08 16:44:02 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [NOTICE]   (153620) : Loading success.
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.194 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.199 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.203 2 INFO nova.virt.libvirt.driver [-] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Instance spawned successfully.
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.203 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.721 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.721 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.722 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.723 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.724 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:44:02 compute-0 nova_compute[117413]: 2025-10-08 16:44:02.724 2 DEBUG nova.virt.libvirt.driver [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.237 2 INFO nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Took 8.15 seconds to spawn the instance on the hypervisor.
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.238 2 DEBUG nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.461 2 DEBUG nova.compute.manager [req-3e4e5151-fc6d-410d-894a-f90443ef8927 req-75cb77e1-55bc-4332-8c46-d9e7c01268c3 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-plugged-1fb61428-1e3c-45d8-83a4-4616134641ec external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.461 2 DEBUG oslo_concurrency.lockutils [req-3e4e5151-fc6d-410d-894a-f90443ef8927 req-75cb77e1-55bc-4332-8c46-d9e7c01268c3 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.462 2 DEBUG oslo_concurrency.lockutils [req-3e4e5151-fc6d-410d-894a-f90443ef8927 req-75cb77e1-55bc-4332-8c46-d9e7c01268c3 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.462 2 DEBUG oslo_concurrency.lockutils [req-3e4e5151-fc6d-410d-894a-f90443ef8927 req-75cb77e1-55bc-4332-8c46-d9e7c01268c3 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.463 2 DEBUG nova.compute.manager [req-3e4e5151-fc6d-410d-894a-f90443ef8927 req-75cb77e1-55bc-4332-8c46-d9e7c01268c3 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] No waiting events found dispatching network-vif-plugged-1fb61428-1e3c-45d8-83a4-4616134641ec pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.463 2 WARNING nova.compute.manager [req-3e4e5151-fc6d-410d-894a-f90443ef8927 req-75cb77e1-55bc-4332-8c46-d9e7c01268c3 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received unexpected event network-vif-plugged-1fb61428-1e3c-45d8-83a4-4616134641ec for instance with vm_state active and task_state None.
Oct 08 16:44:03 compute-0 nova_compute[117413]: 2025-10-08 16:44:03.776 2 INFO nova.compute.manager [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Took 13.37 seconds to build instance.
Oct 08 16:44:04 compute-0 nova_compute[117413]: 2025-10-08 16:44:04.282 2 DEBUG oslo_concurrency.lockutils [None req-f09b1fc7-c099-4c32-97c9-dea9fd855534 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.889s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:06 compute-0 nova_compute[117413]: 2025-10-08 16:44:06.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:08 compute-0 nova_compute[117413]: 2025-10-08 16:44:08.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:11 compute-0 nova_compute[117413]: 2025-10-08 16:44:11.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:12 compute-0 podman[153634]: 2025-10-08 16:44:12.494331062 +0000 UTC m=+0.085059403 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:44:13 compute-0 nova_compute[117413]: 2025-10-08 16:44:13.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:14 compute-0 ovn_controller[19768]: 2025-10-08T16:44:14Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:5a:bb 10.100.0.4
Oct 08 16:44:14 compute-0 ovn_controller[19768]: 2025-10-08T16:44:14Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:5a:bb 10.100.0.4
Oct 08 16:44:16 compute-0 nova_compute[117413]: 2025-10-08 16:44:16.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.346 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Creating tmpfile /var/lib/nova/instances/tmp3xceb90b to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.347 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.361 2 DEBUG nova.compute.manager [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3xceb90b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.382 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.383 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.902 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.903 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.903 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:17 compute-0 nova_compute[117413]: 2025-10-08 16:44:17.903 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:44:18 compute-0 nova_compute[117413]: 2025-10-08 16:44:18.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:18 compute-0 podman[153671]: 2025-10-08 16:44:18.525427824 +0000 UTC m=+0.103262326 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, io.buildah.version=1.33.7)
Oct 08 16:44:18 compute-0 nova_compute[117413]: 2025-10-08 16:44:18.939 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.051 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk --force-share --output=json" returned: 0 in 0.112s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.052 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.127 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.309 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.312 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.350 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.038s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.352 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5956MB free_disk=73.22089767456055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.352 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.353 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:19 compute-0 nova_compute[117413]: 2025-10-08 16:44:19.409 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:20 compute-0 nova_compute[117413]: 2025-10-08 16:44:20.405 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 42a86e16-7b85-41fe-be3c-61a97043d11c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:44:20 compute-0 nova_compute[117413]: 2025-10-08 16:44:20.911 2 WARNING nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 655c2e8b-0116-4ce0-a0dd-f74c4d848039 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 08 16:44:20 compute-0 nova_compute[117413]: 2025-10-08 16:44:20.912 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:44:20 compute-0 nova_compute[117413]: 2025-10-08 16:44:20.912 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:44:19 up 52 min,  0 user,  load average: 0.23, 0.16, 0.17\n', 'num_instances': '1', 'num_vm_active': '1', 'num_task_None': '1', 'num_os_type_None': '1', 'num_proj_cc10ca4f587446c896aeb3ac8d6a1fea': '1', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:44:20 compute-0 nova_compute[117413]: 2025-10-08 16:44:20.969 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:44:21 compute-0 nova_compute[117413]: 2025-10-08 16:44:21.478 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:44:21 compute-0 nova_compute[117413]: 2025-10-08 16:44:21.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:21 compute-0 nova_compute[117413]: 2025-10-08 16:44:21.990 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:44:21 compute-0 nova_compute[117413]: 2025-10-08 16:44:21.991 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.638s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.227 2 DEBUG nova.compute.manager [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3xceb90b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='655c2e8b-0116-4ce0-a0dd-f74c4d848039',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.966 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.967 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.967 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.967 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:23 compute-0 nova_compute[117413]: 2025-10-08 16:44:23.967 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:44:24 compute-0 nova_compute[117413]: 2025-10-08 16:44:24.244 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-655c2e8b-0116-4ce0-a0dd-f74c4d848039" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:44:24 compute-0 nova_compute[117413]: 2025-10-08 16:44:24.245 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-655c2e8b-0116-4ce0-a0dd-f74c4d848039" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:44:24 compute-0 nova_compute[117413]: 2025-10-08 16:44:24.245 2 DEBUG nova.network.neutron [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:44:24 compute-0 nova_compute[117413]: 2025-10-08 16:44:24.752 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:25 compute-0 nova_compute[117413]: 2025-10-08 16:44:25.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:25 compute-0 podman[153699]: 2025-10-08 16:44:25.454438648 +0000 UTC m=+0.062874157 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 08 16:44:25 compute-0 nova_compute[117413]: 2025-10-08 16:44:25.562 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:25 compute-0 nova_compute[117413]: 2025-10-08 16:44:25.719 2 DEBUG nova.network.neutron [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Updating instance_info_cache with network_info: [{"id": "2fb37d2a-51a6-4921-9339-bb6623b76913", "address": "fa:16:3e:3d:73:6b", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb37d2a-51", "ovs_interfaceid": "2fb37d2a-51a6-4921-9339-bb6623b76913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.226 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-655c2e8b-0116-4ce0-a0dd-f74c4d848039" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.243 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3xceb90b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='655c2e8b-0116-4ce0-a0dd-f74c4d848039',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.244 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Creating instance directory: /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.245 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Creating disk.info with the contents: {'/var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk': 'qcow2', '/var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.246 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.247 2 DEBUG nova.objects.instance [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 655c2e8b-0116-4ce0-a0dd-f74c4d848039 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:44:26 compute-0 podman[153719]: 2025-10-08 16:44:26.716523086 +0000 UTC m=+0.071066571 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.756 2 DEBUG oslo_utils.imageutils.format_inspector [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.760 2 DEBUG oslo_utils.imageutils.format_inspector [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.762 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.842 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.844 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.845 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.846 2 DEBUG oslo_utils.imageutils.format_inspector [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.852 2 DEBUG oslo_utils.imageutils.format_inspector [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.853 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.926 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.926 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:26 compute-0 nova_compute[117413]: 2025-10-08 16:44:26.998 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk 1073741824" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.000 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.155s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.001 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.063 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.065 2 DEBUG nova.virt.disk.api [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.066 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.128 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.129 2 DEBUG nova.virt.disk.api [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.129 2 DEBUG nova.objects.instance [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 655c2e8b-0116-4ce0-a0dd-f74c4d848039 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.637 2 DEBUG nova.objects.base [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<655c2e8b-0116-4ce0-a0dd-f74c4d848039> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.638 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.670 2 DEBUG oslo_concurrency.processutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039/disk.config 497664" returned: 0 in 0.031s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.671 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.674 2 DEBUG nova.virt.libvirt.vif [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:43:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-916982275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-916982275',id=30,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:43:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-3s1s8pl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:43:46Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=655c2e8b-0116-4ce0-a0dd-f74c4d848039,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fb37d2a-51a6-4921-9339-bb6623b76913", "address": "fa:16:3e:3d:73:6b", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2fb37d2a-51", "ovs_interfaceid": "2fb37d2a-51a6-4921-9339-bb6623b76913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.674 2 DEBUG nova.network.os_vif_util [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "2fb37d2a-51a6-4921-9339-bb6623b76913", "address": "fa:16:3e:3d:73:6b", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2fb37d2a-51", "ovs_interfaceid": "2fb37d2a-51a6-4921-9339-bb6623b76913", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.676 2 DEBUG nova.network.os_vif_util [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:73:6b,bridge_name='br-int',has_traffic_filtering=True,id=2fb37d2a-51a6-4921-9339-bb6623b76913,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb37d2a-51') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.678 2 DEBUG os_vif [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:73:6b,bridge_name='br-int',has_traffic_filtering=True,id=2fb37d2a-51a6-4921-9339-bb6623b76913,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb37d2a-51') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.680 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.680 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'dd0226fd-4aaf-5133-b5b3-6c9ef8cc2b7c', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fb37d2a-51, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap2fb37d2a-51, col_values=(('qos', UUID('88044c0f-4d33-4b32-a31a-8033793539b0')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap2fb37d2a-51, col_values=(('external_ids', {'iface-id': '2fb37d2a-51a6-4921-9339-bb6623b76913', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:73:6b', 'vm-uuid': '655c2e8b-0116-4ce0-a0dd-f74c4d848039'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 NetworkManager[1034]: <info>  [1759941867.6934] manager: (tap2fb37d2a-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.700 2 INFO os_vif [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:73:6b,bridge_name='br-int',has_traffic_filtering=True,id=2fb37d2a-51a6-4921-9339-bb6623b76913,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb37d2a-51')
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.701 2 DEBUG nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.702 2 DEBUG nova.compute.manager [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3xceb90b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='655c2e8b-0116-4ce0-a0dd-f74c4d848039',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.703 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.777 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:27.994 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:44:27 compute-0 nova_compute[117413]: 2025-10-08 16:44:27.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:27 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:27.995 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:44:28 compute-0 nova_compute[117413]: 2025-10-08 16:44:28.659 2 DEBUG nova.network.neutron [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Port 2fb37d2a-51a6-4921-9339-bb6623b76913 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:44:28 compute-0 nova_compute[117413]: 2025-10-08 16:44:28.676 2 DEBUG nova.compute.manager [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3xceb90b',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='655c2e8b-0116-4ce0-a0dd-f74c4d848039',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:44:29 compute-0 podman[127881]: time="2025-10-08T16:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:44:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:44:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3494 "" "Go-http-client/1.1"
Oct 08 16:44:30 compute-0 nova_compute[117413]: 2025-10-08 16:44:30.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: ERROR   16:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: ERROR   16:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: ERROR   16:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: ERROR   16:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:44:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:44:31 compute-0 podman[153759]: 2025-10-08 16:44:31.476740656 +0000 UTC m=+0.081487820 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:44:31 compute-0 ovn_controller[19768]: 2025-10-08T16:44:31Z|00267|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 08 16:44:31 compute-0 podman[153760]: 2025-10-08 16:44:31.573270588 +0000 UTC m=+0.166088100 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true)
Oct 08 16:44:31 compute-0 kernel: tap2fb37d2a-51: entered promiscuous mode
Oct 08 16:44:31 compute-0 NetworkManager[1034]: <info>  [1759941871.8359] manager: (tap2fb37d2a-51): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Oct 08 16:44:31 compute-0 ovn_controller[19768]: 2025-10-08T16:44:31Z|00268|binding|INFO|Claiming lport 2fb37d2a-51a6-4921-9339-bb6623b76913 for this additional chassis.
Oct 08 16:44:31 compute-0 nova_compute[117413]: 2025-10-08 16:44:31.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:31 compute-0 ovn_controller[19768]: 2025-10-08T16:44:31Z|00269|binding|INFO|2fb37d2a-51a6-4921-9339-bb6623b76913: Claiming fa:16:3e:3d:73:6b 10.100.0.11
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.848 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:73:6b 10.100.0.11'], port_security=['fa:16:3e:3d:73:6b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '655c2e8b-0116-4ce0-a0dd-f74c4d848039', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '10', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=2fb37d2a-51a6-4921-9339-bb6623b76913) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.849 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 2fb37d2a-51a6-4921-9339-bb6623b76913 in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 unbound from our chassis
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.851 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:44:31 compute-0 ovn_controller[19768]: 2025-10-08T16:44:31Z|00270|binding|INFO|Setting lport 2fb37d2a-51a6-4921-9339-bb6623b76913 ovn-installed in OVS
Oct 08 16:44:31 compute-0 nova_compute[117413]: 2025-10-08 16:44:31.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:31 compute-0 nova_compute[117413]: 2025-10-08 16:44:31.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.869 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3f59cd7b-b2af-4a39-8fef-f277ef122eee]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:31 compute-0 systemd-udevd[153819]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:44:31 compute-0 NetworkManager[1034]: <info>  [1759941871.8836] device (tap2fb37d2a-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:44:31 compute-0 nova_compute[117413]: 2025-10-08 16:44:31.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:31 compute-0 NetworkManager[1034]: <info>  [1759941871.8855] device (tap2fb37d2a-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:44:31 compute-0 systemd-machined[77548]: New machine qemu-24-instance-0000001e.
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.899 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[95f48eec-7de2-4d22-8cac-f6b88e5ccaf8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.902 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[af6b833c-5cce-41e4-984d-a3bb6d3248c5]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:31 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-0000001e.
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.922 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[5923066f-e6a4-4ed7-8d84-21401710685b]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.941 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9390baaa-df8b-42a3-b79e-a70e6adc233b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313003, 'reachable_time': 36908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153829, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.961 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7071349a-400b-4baa-a474-b2f12ee2077f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 313019, 'tstamp': 313019}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153832, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 313022, 'tstamp': 313022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153832, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.963 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:31 compute-0 nova_compute[117413]: 2025-10-08 16:44:31.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:31 compute-0 nova_compute[117413]: 2025-10-08 16:44:31.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.966 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d7d7c0-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.966 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.967 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6d7d7c0-f0, col_values=(('external_ids', {'iface-id': 'a63b94a5-36df-4884-a4f9-6965418ea72c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.967 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:44:31 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:31.968 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[477392c7-bbf4-498c-852f-29967a5853d8]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:32 compute-0 nova_compute[117413]: 2025-10-08 16:44:32.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:36 compute-0 ovn_controller[19768]: 2025-10-08T16:44:36Z|00271|binding|INFO|Claiming lport 2fb37d2a-51a6-4921-9339-bb6623b76913 for this chassis.
Oct 08 16:44:36 compute-0 ovn_controller[19768]: 2025-10-08T16:44:36Z|00272|binding|INFO|2fb37d2a-51a6-4921-9339-bb6623b76913: Claiming fa:16:3e:3d:73:6b 10.100.0.11
Oct 08 16:44:36 compute-0 ovn_controller[19768]: 2025-10-08T16:44:36Z|00273|binding|INFO|Setting lport 2fb37d2a-51a6-4921-9339-bb6623b76913 up in Southbound
Oct 08 16:44:36 compute-0 nova_compute[117413]: 2025-10-08 16:44:36.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:37 compute-0 nova_compute[117413]: 2025-10-08 16:44:37.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:37.997 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.460 2 INFO nova.compute.manager [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Post operation of migration started
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.461 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.550 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.550 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.618 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-655c2e8b-0116-4ce0-a0dd-f74c4d848039" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.619 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-655c2e8b-0116-4ce0-a0dd-f74c4d848039" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:44:38 compute-0 nova_compute[117413]: 2025-10-08 16:44:38.619 2 DEBUG nova.network.neutron [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:44:39 compute-0 nova_compute[117413]: 2025-10-08 16:44:39.125 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:39 compute-0 nova_compute[117413]: 2025-10-08 16:44:39.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:39 compute-0 nova_compute[117413]: 2025-10-08 16:44:39.546 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:39 compute-0 nova_compute[117413]: 2025-10-08 16:44:39.700 2 DEBUG nova.network.neutron [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Updating instance_info_cache with network_info: [{"id": "2fb37d2a-51a6-4921-9339-bb6623b76913", "address": "fa:16:3e:3d:73:6b", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb37d2a-51", "ovs_interfaceid": "2fb37d2a-51a6-4921-9339-bb6623b76913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:44:40 compute-0 nova_compute[117413]: 2025-10-08 16:44:40.206 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-655c2e8b-0116-4ce0-a0dd-f74c4d848039" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:44:40 compute-0 nova_compute[117413]: 2025-10-08 16:44:40.728 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:40 compute-0 nova_compute[117413]: 2025-10-08 16:44:40.729 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:40 compute-0 nova_compute[117413]: 2025-10-08 16:44:40.729 2 DEBUG oslo_concurrency.lockutils [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:40 compute-0 nova_compute[117413]: 2025-10-08 16:44:40.735 2 INFO nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:44:40 compute-0 virtqemud[117740]: Domain id=24 name='instance-0000001e' uuid=655c2e8b-0116-4ce0-a0dd-f74c4d848039 is tainted: custom-monitor
Oct 08 16:44:41 compute-0 nova_compute[117413]: 2025-10-08 16:44:41.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:41 compute-0 nova_compute[117413]: 2025-10-08 16:44:41.744 2 INFO nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:44:41 compute-0 nova_compute[117413]: 2025-10-08 16:44:41.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:41.939 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:41.940 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:41.940 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:42 compute-0 nova_compute[117413]: 2025-10-08 16:44:42.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:42 compute-0 nova_compute[117413]: 2025-10-08 16:44:42.751 2 INFO nova.virt.libvirt.driver [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:44:42 compute-0 nova_compute[117413]: 2025-10-08 16:44:42.756 2 DEBUG nova.compute.manager [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:44:43 compute-0 nova_compute[117413]: 2025-10-08 16:44:43.277 2 DEBUG nova.objects.instance [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:44:43 compute-0 podman[153860]: 2025-10-08 16:44:43.51485238 +0000 UTC m=+0.107670843 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=multipathd, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd)
Oct 08 16:44:43 compute-0 nova_compute[117413]: 2025-10-08 16:44:43.871 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:43 compute-0 nova_compute[117413]: 2025-10-08 16:44:43.872 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:44:44 compute-0 nova_compute[117413]: 2025-10-08 16:44:44.295 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:44 compute-0 nova_compute[117413]: 2025-10-08 16:44:44.389 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:44 compute-0 nova_compute[117413]: 2025-10-08 16:44:44.390 2 WARNING neutronclient.v2_0.client [None req-618b4278-a584-4059-b2c5-f6e4096ece3c ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:46 compute-0 nova_compute[117413]: 2025-10-08 16:44:46.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:46 compute-0 nova_compute[117413]: 2025-10-08 16:44:46.998 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:46 compute-0 nova_compute[117413]: 2025-10-08 16:44:46.998 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:46 compute-0 nova_compute[117413]: 2025-10-08 16:44:46.999 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:46 compute-0 nova_compute[117413]: 2025-10-08 16:44:46.999 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:46.999 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.014 2 INFO nova.compute.manager [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Terminating instance
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.533 2 DEBUG nova.compute.manager [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:44:47 compute-0 kernel: tap1fb61428-1e (unregistering): left promiscuous mode
Oct 08 16:44:47 compute-0 NetworkManager[1034]: <info>  [1759941887.5605] device (tap1fb61428-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:44:47 compute-0 ovn_controller[19768]: 2025-10-08T16:44:47Z|00274|binding|INFO|Releasing lport 1fb61428-1e3c-45d8-83a4-4616134641ec from this chassis (sb_readonly=0)
Oct 08 16:44:47 compute-0 ovn_controller[19768]: 2025-10-08T16:44:47Z|00275|binding|INFO|Setting lport 1fb61428-1e3c-45d8-83a4-4616134641ec down in Southbound
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 ovn_controller[19768]: 2025-10-08T16:44:47Z|00276|binding|INFO|Removing iface tap1fb61428-1e ovn-installed in OVS
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.589 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:5a:bb 10.100.0.4'], port_security=['fa:16:3e:8d:5a:bb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '42a86e16-7b85-41fe-be3c-61a97043d11c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '5', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=1fb61428-1e3c-45d8-83a4-4616134641ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.590 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 1fb61428-1e3c-45d8-83a4-4616134641ec in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 unbound from our chassis
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.591 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.617 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[179699f9-9db4-41cd-be43-495a8f5c6660]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 08 16:44:47 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001f.scope: Consumed 13.607s CPU time.
Oct 08 16:44:47 compute-0 systemd-machined[77548]: Machine qemu-23-instance-0000001f terminated.
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.665 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[37a3bb73-b499-47f9-89ad-6c513d396b9d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.668 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[28eb92bc-16a2-4ad8-8d3b-bed3fbb9a188]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.720 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[80fc6b33-5c0e-44fd-96ac-011cfe1aa2df]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.755 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bed4ac-15d7-4d69-8c61-67e965c95c4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 313003, 'reachable_time': 36908, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 153891, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.788 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a47b36dd-134e-4c18-84b3-3ec53a5907f8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 313019, 'tstamp': 313019}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153895, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 313022, 'tstamp': 313022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 153895, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.791 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.800 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d7d7c0-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.801 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.801 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6d7d7c0-f0, col_values=(('external_ids', {'iface-id': 'a63b94a5-36df-4884-a4f9-6965418ea72c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.802 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:44:47 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:47.803 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ce3ddd-da47-4db1-9168-126b061ecf49]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.837 2 INFO nova.virt.libvirt.driver [-] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Instance destroyed successfully.
Oct 08 16:44:47 compute-0 nova_compute[117413]: 2025-10-08 16:44:47.837 2 DEBUG nova.objects.instance [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lazy-loading 'resources' on Instance uuid 42a86e16-7b85-41fe-be3c-61a97043d11c obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.345 2 DEBUG nova.virt.libvirt.vif [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:43:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-229913144',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-229913144',id=31,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:44:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-z66gwgih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:44:03Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=42a86e16-7b85-41fe-be3c-61a97043d11c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.346 2 DEBUG nova.network.os_vif_util [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "1fb61428-1e3c-45d8-83a4-4616134641ec", "address": "fa:16:3e:8d:5a:bb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1fb61428-1e", "ovs_interfaceid": "1fb61428-1e3c-45d8-83a4-4616134641ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.347 2 DEBUG nova.network.os_vif_util [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.347 2 DEBUG os_vif [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1fb61428-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.359 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=909356e9-1260-4487-81ec-940fd093afdc) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.365 2 INFO os_vif [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:5a:bb,bridge_name='br-int',has_traffic_filtering=True,id=1fb61428-1e3c-45d8-83a4-4616134641ec,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1fb61428-1e')
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.366 2 INFO nova.virt.libvirt.driver [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Deleting instance files /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c_del
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.367 2 INFO nova.virt.libvirt.driver [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Deletion of /var/lib/nova/instances/42a86e16-7b85-41fe-be3c-61a97043d11c_del complete
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.470 2 DEBUG nova.compute.manager [req-46c49347-c500-4072-92c7-c70ff4bc37a1 req-5312422f-259a-4a7a-8236-427818d21e8c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-unplugged-1fb61428-1e3c-45d8-83a4-4616134641ec external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.471 2 DEBUG oslo_concurrency.lockutils [req-46c49347-c500-4072-92c7-c70ff4bc37a1 req-5312422f-259a-4a7a-8236-427818d21e8c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.472 2 DEBUG oslo_concurrency.lockutils [req-46c49347-c500-4072-92c7-c70ff4bc37a1 req-5312422f-259a-4a7a-8236-427818d21e8c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.472 2 DEBUG oslo_concurrency.lockutils [req-46c49347-c500-4072-92c7-c70ff4bc37a1 req-5312422f-259a-4a7a-8236-427818d21e8c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.473 2 DEBUG nova.compute.manager [req-46c49347-c500-4072-92c7-c70ff4bc37a1 req-5312422f-259a-4a7a-8236-427818d21e8c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] No waiting events found dispatching network-vif-unplugged-1fb61428-1e3c-45d8-83a4-4616134641ec pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.473 2 DEBUG nova.compute.manager [req-46c49347-c500-4072-92c7-c70ff4bc37a1 req-5312422f-259a-4a7a-8236-427818d21e8c c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-unplugged-1fb61428-1e3c-45d8-83a4-4616134641ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.883 2 INFO nova.compute.manager [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Took 1.35 seconds to destroy the instance on the hypervisor.
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.884 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.884 2 DEBUG nova.compute.manager [-] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.885 2 DEBUG nova.network.neutron [-] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:44:48 compute-0 nova_compute[117413]: 2025-10-08 16:44:48.885 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:49 compute-0 nova_compute[117413]: 2025-10-08 16:44:49.344 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:49 compute-0 podman[153910]: 2025-10-08 16:44:49.514269662 +0000 UTC m=+0.096609715 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.121 2 DEBUG nova.network.neutron [-] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.531 2 DEBUG nova.compute.manager [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-unplugged-1fb61428-1e3c-45d8-83a4-4616134641ec external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.532 2 DEBUG oslo_concurrency.lockutils [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.532 2 DEBUG oslo_concurrency.lockutils [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.533 2 DEBUG oslo_concurrency.lockutils [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.533 2 DEBUG nova.compute.manager [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] No waiting events found dispatching network-vif-unplugged-1fb61428-1e3c-45d8-83a4-4616134641ec pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.534 2 DEBUG nova.compute.manager [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-unplugged-1fb61428-1e3c-45d8-83a4-4616134641ec for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.534 2 DEBUG nova.compute.manager [req-5df9d996-4317-4a01-9e1a-a59933ae6df9 req-7c08ea47-6cab-4a61-a72e-e75446de555e c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Received event network-vif-deleted-1fb61428-1e3c-45d8-83a4-4616134641ec external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:50 compute-0 nova_compute[117413]: 2025-10-08 16:44:50.629 2 INFO nova.compute.manager [-] [instance: 42a86e16-7b85-41fe-be3c-61a97043d11c] Took 1.74 seconds to deallocate network for instance.
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.154 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.155 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.212 2 DEBUG nova.compute.provider_tree [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.721 2 DEBUG nova.scheduler.client.report [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.946 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:44:51 compute-0 nova_compute[117413]: 2025-10-08 16:44:51.946 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:44:52 compute-0 nova_compute[117413]: 2025-10-08 16:44:52.450 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.295s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:52 compute-0 nova_compute[117413]: 2025-10-08 16:44:52.454 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:44:52 compute-0 nova_compute[117413]: 2025-10-08 16:44:52.480 2 INFO nova.scheduler.client.report [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Deleted allocations for instance 42a86e16-7b85-41fe-be3c-61a97043d11c
Oct 08 16:44:53 compute-0 nova_compute[117413]: 2025-10-08 16:44:53.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:53 compute-0 nova_compute[117413]: 2025-10-08 16:44:53.511 2 DEBUG oslo_concurrency.lockutils [None req-013d784f-1125-41d3-8033-a76c6e957d5a 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "42a86e16-7b85-41fe-be3c-61a97043d11c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.513s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.053 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.053 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.054 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.054 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.054 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.067 2 INFO nova.compute.manager [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Terminating instance
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.585 2 DEBUG nova.compute.manager [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:44:54 compute-0 kernel: tap2fb37d2a-51 (unregistering): left promiscuous mode
Oct 08 16:44:54 compute-0 NetworkManager[1034]: <info>  [1759941894.6145] device (tap2fb37d2a-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 ovn_controller[19768]: 2025-10-08T16:44:54Z|00277|binding|INFO|Releasing lport 2fb37d2a-51a6-4921-9339-bb6623b76913 from this chassis (sb_readonly=0)
Oct 08 16:44:54 compute-0 ovn_controller[19768]: 2025-10-08T16:44:54Z|00278|binding|INFO|Setting lport 2fb37d2a-51a6-4921-9339-bb6623b76913 down in Southbound
Oct 08 16:44:54 compute-0 ovn_controller[19768]: 2025-10-08T16:44:54Z|00279|binding|INFO|Removing iface tap2fb37d2a-51 ovn-installed in OVS
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.681 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:73:6b 10.100.0.11'], port_security=['fa:16:3e:3d:73:6b 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '655c2e8b-0116-4ce0-a0dd-f74c4d848039', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '15', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=2fb37d2a-51a6-4921-9339-bb6623b76913) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.683 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 2fb37d2a-51a6-4921-9339-bb6623b76913 in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 unbound from our chassis
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.684 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.685 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e82ecd68-6b36-4a1f-8cee-49822c950022]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.686 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 namespace which is not needed anymore
Oct 08 16:44:54 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 08 16:44:54 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000001e.scope: Consumed 2.942s CPU time.
Oct 08 16:44:54 compute-0 systemd-machined[77548]: Machine qemu-24-instance-0000001e terminated.
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [NOTICE]   (153620) : haproxy version is 3.0.5-8e879a5
Oct 08 16:44:54 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [NOTICE]   (153620) : path to executable is /usr/sbin/haproxy
Oct 08 16:44:54 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [WARNING]  (153620) : Exiting Master process...
Oct 08 16:44:54 compute-0 podman[153957]: 2025-10-08 16:44:54.816604579 +0000 UTC m=+0.038406154 container kill 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [ALERT]    (153620) : Current worker (153622) exited with code 143 (Terminated)
Oct 08 16:44:54 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[153616]: [WARNING]  (153620) : All workers exited. Exiting... (0)
Oct 08 16:44:54 compute-0 systemd[1]: libpod-30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4.scope: Deactivated successfully.
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.860 2 INFO nova.virt.libvirt.driver [-] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Instance destroyed successfully.
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.860 2 DEBUG nova.objects.instance [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lazy-loading 'resources' on Instance uuid 655c2e8b-0116-4ce0-a0dd-f74c4d848039 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:44:54 compute-0 podman[153981]: 2025-10-08 16:44:54.869107977 +0000 UTC m=+0.030752814 container died 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 16:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4-userdata-shm.mount: Deactivated successfully.
Oct 08 16:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-be427bddb8601e8d4d587e48f59a146007e859734c10aef24b332c6818bb6eb7-merged.mount: Deactivated successfully.
Oct 08 16:44:54 compute-0 podman[153981]: 2025-10-08 16:44:54.92009447 +0000 UTC m=+0.081739287 container cleanup 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 08 16:44:54 compute-0 systemd[1]: libpod-conmon-30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4.scope: Deactivated successfully.
Oct 08 16:44:54 compute-0 podman[153986]: 2025-10-08 16:44:54.940164707 +0000 UTC m=+0.091620572 container remove 30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4 (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.950 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[70d1e24c-0912-46a1-8217-24f01dbf5af1]: (4, ("Wed Oct  8 04:44:54 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 (30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4)\n30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4\nWed Oct  8 04:44:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 (30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4)\n30f7ac62d09a9fbaaafbb8d1f3099cb6d82abf5d3887e7f8f33df713896e12f4\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.952 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[83ff3c07-0c88-42db-ba2d-e86c1336a060]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.953 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.953 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[15f0735b-df54-4645-9e24-820bf1c94d67]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.954 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 kernel: tapc6d7d7c0-f0: left promiscuous mode
Oct 08 16:44:54 compute-0 nova_compute[117413]: 2025-10-08 16:44:54.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:54 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:54.989 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[edcff388-c680-4ffd-becf-4609398d2afa]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:55.027 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[06c8adf2-815f-4727-84a6-d08496b5843f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:55.028 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[ab19f3d3-68cb-4af2-b226-6e4966ff467b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:55.056 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[fa099ec6-cf99-4baa-b122-b22e7ae6a59c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 312994, 'reachable_time': 19499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154027, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:55.059 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:44:55 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:44:55.059 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[94a2ea8a-5193-4bb0-8e10-afb1a5ca2fb2]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:44:55 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6d7d7c0\x2dfbc5\x2d4242\x2da987\x2dda6fff2b6bc3.mount: Deactivated successfully.
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.372 2 DEBUG nova.virt.libvirt.vif [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:43:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-916982275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-916982275',id=30,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:43:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-3s1s8pl2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vi
rtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:44:43Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=655c2e8b-0116-4ce0-a0dd-f74c4d848039,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fb37d2a-51a6-4921-9339-bb6623b76913", "address": "fa:16:3e:3d:73:6b", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb37d2a-51", "ovs_interfaceid": "2fb37d2a-51a6-4921-9339-bb6623b76913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.373 2 DEBUG nova.network.os_vif_util [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "2fb37d2a-51a6-4921-9339-bb6623b76913", "address": "fa:16:3e:3d:73:6b", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb37d2a-51", "ovs_interfaceid": "2fb37d2a-51a6-4921-9339-bb6623b76913", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.374 2 DEBUG nova.network.os_vif_util [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:73:6b,bridge_name='br-int',has_traffic_filtering=True,id=2fb37d2a-51a6-4921-9339-bb6623b76913,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb37d2a-51') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.374 2 DEBUG os_vif [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:73:6b,bridge_name='br-int',has_traffic_filtering=True,id=2fb37d2a-51a6-4921-9339-bb6623b76913,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb37d2a-51') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb37d2a-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=88044c0f-4d33-4b32-a31a-8033793539b0) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.385 2 INFO os_vif [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:73:6b,bridge_name='br-int',has_traffic_filtering=True,id=2fb37d2a-51a6-4921-9339-bb6623b76913,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fb37d2a-51')
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.386 2 INFO nova.virt.libvirt.driver [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Deleting instance files /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039_del
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.386 2 INFO nova.virt.libvirt.driver [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Deletion of /var/lib/nova/instances/655c2e8b-0116-4ce0-a0dd-f74c4d848039_del complete
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.472 2 DEBUG nova.compute.manager [req-a9ca7e9d-19ab-4427-8ba2-dfea4f61ab51 req-6c67a47e-197c-4940-b0b6-8db2e70b9bf9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Received event network-vif-unplugged-2fb37d2a-51a6-4921-9339-bb6623b76913 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.473 2 DEBUG oslo_concurrency.lockutils [req-a9ca7e9d-19ab-4427-8ba2-dfea4f61ab51 req-6c67a47e-197c-4940-b0b6-8db2e70b9bf9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.476 2 DEBUG oslo_concurrency.lockutils [req-a9ca7e9d-19ab-4427-8ba2-dfea4f61ab51 req-6c67a47e-197c-4940-b0b6-8db2e70b9bf9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.476 2 DEBUG oslo_concurrency.lockutils [req-a9ca7e9d-19ab-4427-8ba2-dfea4f61ab51 req-6c67a47e-197c-4940-b0b6-8db2e70b9bf9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.477 2 DEBUG nova.compute.manager [req-a9ca7e9d-19ab-4427-8ba2-dfea4f61ab51 req-6c67a47e-197c-4940-b0b6-8db2e70b9bf9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] No waiting events found dispatching network-vif-unplugged-2fb37d2a-51a6-4921-9339-bb6623b76913 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.478 2 DEBUG nova.compute.manager [req-a9ca7e9d-19ab-4427-8ba2-dfea4f61ab51 req-6c67a47e-197c-4940-b0b6-8db2e70b9bf9 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Received event network-vif-unplugged-2fb37d2a-51a6-4921-9339-bb6623b76913 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.898 2 INFO nova.compute.manager [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.899 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.899 2 DEBUG nova.compute.manager [-] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.899 2 DEBUG nova.network.neutron [-] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:44:55 compute-0 nova_compute[117413]: 2025-10-08 16:44:55.900 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:56 compute-0 nova_compute[117413]: 2025-10-08 16:44:56.338 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:44:56 compute-0 podman[154028]: 2025-10-08 16:44:56.485743405 +0000 UTC m=+0.084137567 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:44:56 compute-0 nova_compute[117413]: 2025-10-08 16:44:56.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.126 2 DEBUG nova.network.neutron [-] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:44:57 compute-0 podman[154047]: 2025-10-08 16:44:57.484800092 +0000 UTC m=+0.080419761 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.639 2 INFO nova.compute.manager [-] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Took 1.74 seconds to deallocate network for instance.
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.842 2 DEBUG nova.compute.manager [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Received event network-vif-unplugged-2fb37d2a-51a6-4921-9339-bb6623b76913 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.843 2 DEBUG oslo_concurrency.lockutils [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.843 2 DEBUG oslo_concurrency.lockutils [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.843 2 DEBUG oslo_concurrency.lockutils [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.843 2 DEBUG nova.compute.manager [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] No waiting events found dispatching network-vif-unplugged-2fb37d2a-51a6-4921-9339-bb6623b76913 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.843 2 WARNING nova.compute.manager [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Received unexpected event network-vif-unplugged-2fb37d2a-51a6-4921-9339-bb6623b76913 for instance with vm_state deleted and task_state None.
Oct 08 16:44:57 compute-0 nova_compute[117413]: 2025-10-08 16:44:57.843 2 DEBUG nova.compute.manager [req-80c9e8b0-9e93-4b5b-9df0-8a11fe982689 req-3e3780bc-a37f-4731-9546-12be7eeed4b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 655c2e8b-0116-4ce0-a0dd-f74c4d848039] Received event network-vif-deleted-2fb37d2a-51a6-4921-9339-bb6623b76913 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:44:58 compute-0 nova_compute[117413]: 2025-10-08 16:44:58.162 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:44:58 compute-0 nova_compute[117413]: 2025-10-08 16:44:58.163 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:44:58 compute-0 nova_compute[117413]: 2025-10-08 16:44:58.173 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.010s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:58 compute-0 nova_compute[117413]: 2025-10-08 16:44:58.259 2 INFO nova.scheduler.client.report [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Deleted allocations for instance 655c2e8b-0116-4ce0-a0dd-f74c4d848039
Oct 08 16:44:59 compute-0 nova_compute[117413]: 2025-10-08 16:44:59.307 2 DEBUG oslo_concurrency.lockutils [None req-bf422256-dcb9-4e28-b864-3bc23f14d83e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "655c2e8b-0116-4ce0-a0dd-f74c4d848039" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.253s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:44:59 compute-0 podman[127881]: time="2025-10-08T16:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:44:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:44:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 08 16:45:00 compute-0 nova_compute[117413]: 2025-10-08 16:45:00.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:01 compute-0 openstack_network_exporter[130039]: ERROR   16:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:45:01 compute-0 openstack_network_exporter[130039]: ERROR   16:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:45:01 compute-0 openstack_network_exporter[130039]: ERROR   16:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:45:01 compute-0 openstack_network_exporter[130039]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:45:01 compute-0 openstack_network_exporter[130039]: ERROR   16:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:45:01 compute-0 nova_compute[117413]: 2025-10-08 16:45:01.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:02 compute-0 podman[154068]: 2025-10-08 16:45:02.459704387 +0000 UTC m=+0.064670218 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:45:02 compute-0 podman[154069]: 2025-10-08 16:45:02.551078921 +0000 UTC m=+0.145487199 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=watcher_latest, container_name=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 08 16:45:05 compute-0 nova_compute[117413]: 2025-10-08 16:45:05.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:06 compute-0 nova_compute[117413]: 2025-10-08 16:45:06.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:09 compute-0 nova_compute[117413]: 2025-10-08 16:45:09.601 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:10 compute-0 nova_compute[117413]: 2025-10-08 16:45:10.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:11 compute-0 nova_compute[117413]: 2025-10-08 16:45:11.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:14 compute-0 podman[154118]: 2025-10-08 16:45:14.499082968 +0000 UTC m=+0.095718770 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 08 16:45:15 compute-0 nova_compute[117413]: 2025-10-08 16:45:15.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:16 compute-0 nova_compute[117413]: 2025-10-08 16:45:16.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:17 compute-0 nova_compute[117413]: 2025-10-08 16:45:17.873 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:18 compute-0 nova_compute[117413]: 2025-10-08 16:45:18.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:18 compute-0 nova_compute[117413]: 2025-10-08 16:45:18.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:18 compute-0 nova_compute[117413]: 2025-10-08 16:45:18.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:18 compute-0 nova_compute[117413]: 2025-10-08 16:45:18.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:18 compute-0 nova_compute[117413]: 2025-10-08 16:45:18.879 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:45:19 compute-0 nova_compute[117413]: 2025-10-08 16:45:19.011 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:45:19 compute-0 nova_compute[117413]: 2025-10-08 16:45:19.012 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:19 compute-0 nova_compute[117413]: 2025-10-08 16:45:19.029 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:19 compute-0 nova_compute[117413]: 2025-10-08 16:45:19.030 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6128MB free_disk=73.24956893920898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:45:19 compute-0 nova_compute[117413]: 2025-10-08 16:45:19.030 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:19 compute-0 nova_compute[117413]: 2025-10-08 16:45:19.030 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.244 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.244 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:45:19 up 53 min,  0 user,  load average: 0.55, 0.28, 0.21\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.283 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.350 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.350 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.361 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.395 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ST
ORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.420 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:45:20 compute-0 podman[154140]: 2025-10-08 16:45:20.479614428 +0000 UTC m=+0.086200147 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 08 16:45:20 compute-0 nova_compute[117413]: 2025-10-08 16:45:20.928 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:45:21 compute-0 nova_compute[117413]: 2025-10-08 16:45:21.443 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:45:21 compute-0 nova_compute[117413]: 2025-10-08 16:45:21.444 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.413s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:21 compute-0 nova_compute[117413]: 2025-10-08 16:45:21.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:23 compute-0 nova_compute[117413]: 2025-10-08 16:45:23.902 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:23 compute-0 nova_compute[117413]: 2025-10-08 16:45:23.902 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.410 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Starting instance... _do_build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2472
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.439 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.440 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.441 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.441 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.957 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.958 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.968 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.12/site-packages/nova/virt/hardware.py:2528
Oct 08 16:45:24 compute-0 nova_compute[117413]: 2025-10-08 16:45:24.968 2 INFO nova.compute.claims [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Claim successful on node compute-0.ctlplane.example.com
Oct 08 16:45:25 compute-0 nova_compute[117413]: 2025-10-08 16:45:25.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:25 compute-0 nova_compute[117413]: 2025-10-08 16:45:25.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:26 compute-0 nova_compute[117413]: 2025-10-08 16:45:26.022 2 DEBUG nova.compute.provider_tree [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:45:26 compute-0 nova_compute[117413]: 2025-10-08 16:45:26.531 2 DEBUG nova.scheduler.client.report [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.045 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.087s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.045 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2869
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:27 compute-0 podman[154161]: 2025-10-08 16:45:27.510056665 +0000 UTC m=+0.106857710 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.557 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2016
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.558 2 DEBUG nova.network.neutron [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] allocate_for_instance() allocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1208
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.558 2 WARNING neutronclient.v2_0.client [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:27 compute-0 nova_compute[117413]: 2025-10-08 16:45:27.559 2 WARNING neutronclient.v2_0.client [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:27 compute-0 podman[154182]: 2025-10-08 16:45:27.639263715 +0000 UTC m=+0.080007229 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 08 16:45:28 compute-0 nova_compute[117413]: 2025-10-08 16:45:28.068 2 INFO nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 08 16:45:28 compute-0 nova_compute[117413]: 2025-10-08 16:45:28.582 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Start building block device mappings for instance. _build_resources /usr/lib/python3.12/site-packages/nova/compute/manager.py:2904
Oct 08 16:45:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:29.404 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:29.407 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:45:29 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:29.408 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.529 2 DEBUG nova.network.neutron [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Successfully created port: 6072b3be-23f3-4c1d-98c1-4a8bd769e681 _create_port_minimal /usr/lib/python3.12/site-packages/nova/network/neutron.py:550
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.614 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:2678
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.615 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Creating instance directory _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5138
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.615 2 INFO nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Creating image(s)
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.616 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "/var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.616 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "/var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.616 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "/var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.617 2 DEBUG oslo_utils.imageutils.format_inspector [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.619 2 DEBUG oslo_utils.imageutils.format_inspector [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.621 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.696 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.697 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.699 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.700 2 DEBUG oslo_utils.imageutils.format_inspector [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.707 2 DEBUG oslo_utils.imageutils.format_inspector [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.708 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:29 compute-0 podman[127881]: time="2025-10-08T16:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:45:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:45:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3040 "" "Go-http-client/1.1"
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.778 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.780 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.838 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.840 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.141s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.841 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.908 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.909 2 DEBUG nova.virt.disk.api [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Checking if we can resize image /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.910 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.978 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.979 2 DEBUG nova.virt.disk.api [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Cannot resize image /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.980 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Created local disks _create_image /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5270
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.981 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Ensure instance console log exists: /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/console.log _ensure_console_log_for_instance /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5017
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.982 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.982 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:29 compute-0 nova_compute[117413]: 2025-10-08 16:45:29.983 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.630 2 DEBUG nova.network.neutron [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Successfully updated port: 6072b3be-23f3-4c1d-98c1-4a8bd769e681 _update_port /usr/lib/python3.12/site-packages/nova/network/neutron.py:588
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.677 2 DEBUG nova.compute.manager [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-changed-6072b3be-23f3-4c1d-98c1-4a8bd769e681 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.677 2 DEBUG nova.compute.manager [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Refreshing instance network info cache due to event network-changed-6072b3be-23f3-4c1d-98c1-4a8bd769e681. external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11817
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.678 2 DEBUG oslo_concurrency.lockutils [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-0e6d366d-93d8-4543-9f3a-bcc988af9498" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.678 2 DEBUG oslo_concurrency.lockutils [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-0e6d366d-93d8-4543-9f3a-bcc988af9498" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:45:30 compute-0 nova_compute[117413]: 2025-10-08 16:45:30.678 2 DEBUG nova.network.neutron [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Refreshing network info cache for port 6072b3be-23f3-4c1d-98c1-4a8bd769e681 _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2067
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.138 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "refresh_cache-0e6d366d-93d8-4543-9f3a-bcc988af9498" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.185 2 WARNING neutronclient.v2_0.client [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.257 2 DEBUG nova.network.neutron [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.397 2 DEBUG nova.network.neutron [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: ERROR   16:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: ERROR   16:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: ERROR   16:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: ERROR   16:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:45:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.905 2 DEBUG oslo_concurrency.lockutils [req-ed288433-60fd-44a9-9398-1894618fddfa req-fb39cc4a-0fca-4958-bec6-285612c503c5 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-0e6d366d-93d8-4543-9f3a-bcc988af9498" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.906 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquired lock "refresh_cache-0e6d366d-93d8-4543-9f3a-bcc988af9498" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:45:31 compute-0 nova_compute[117413]: 2025-10-08 16:45:31.906 2 DEBUG nova.network.neutron [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:45:32 compute-0 nova_compute[117413]: 2025-10-08 16:45:32.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:32 compute-0 nova_compute[117413]: 2025-10-08 16:45:32.488 2 DEBUG nova.network.neutron [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.12/site-packages/nova/network/neutron.py:3383
Oct 08 16:45:32 compute-0 nova_compute[117413]: 2025-10-08 16:45:32.657 2 WARNING neutronclient.v2_0.client [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:32 compute-0 nova_compute[117413]: 2025-10-08 16:45:32.812 2 DEBUG nova.network.neutron [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Updating instance_info_cache with network_info: [{"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.318 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Releasing lock "refresh_cache-0e6d366d-93d8-4543-9f3a-bcc988af9498" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.319 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Instance network_info: |[{"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.12/site-packages/nova/compute/manager.py:2031
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.321 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Start _get_guest_xml network_info=[{"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'boot_index': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'size': 0, 'guest_format': None, 'encryption_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': '44390e9d-4b05-4916-9ba9-97b19c79ef43'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None}share_info=None _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8046
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.326 2 WARNING nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.328 2 DEBUG nova.virt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] InstanceDriverMetadata: InstanceDriverMetadata(root_type='image', root_id='44390e9d-4b05-4916-9ba9-97b19c79ef43', instance_meta=NovaInstanceMeta(name='tempest-TestExecuteZoneMigrationStrategy-server-1973299131', uuid='0e6d366d-93d8-4543-9f3a-bcc988af9498'), owner=OwnerMeta(userid='7560d8247c7549c9a1a5774b411e593f', username='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin', projectid='cc10ca4f587446c896aeb3ac8d6a1fea', projectname='tempest-TestExecuteZoneMigrationStrategy-1978933030'), image=ImageMeta(id='44390e9d-4b05-4916-9ba9-97b19c79ef43', name=None, container_format='bare', disk_format='qcow2', min_disk=1, min_ram=0, properties={'hw_rng_model': 'virtio'}), flavor=FlavorMeta(name='m1.nano', flavorid='43cd5d45-bd07-4889-a671-dd23291090c1', memory_mb=128, vcpus=1, root_gb=1, ephemeral_gb=0, extra_specs={'hw_rng:allowed': 'True'}, swap=0), network_info=[{"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": 
"6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}], nova_package='32.1.0-0.20251008114656.23cad1d.el10', creation_time=1759941933.3281095) get_instance_driver_metadata /usr/lib/python3.12/site-packages/nova/virt/driver.py:437
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.332 2 DEBUG nova.virt.libvirt.host [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1698
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.333 2 DEBUG nova.virt.libvirt.host [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1708
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.338 2 DEBUG nova.virt.libvirt.host [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1717
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.338 2 DEBUG nova.virt.libvirt.host [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.12/site-packages/nova/virt/libvirt/host.py:1724
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.339 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:5809
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.339 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-08T16:08:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='43cd5d45-bd07-4889-a671-dd23291090c1',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-08T16:09:01Z,direct_url=<?>,disk_format='qcow2',id=44390e9d-4b05-4916-9ba9-97b19c79ef43,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='b2eb43725f1e4dbfa51aeb475eac607e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-08T16:09:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:571
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.339 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:356
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.339 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:360
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.340 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:396
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.340 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:400
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.340 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.12/site-packages/nova/virt/hardware.py:438
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.340 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:577
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.340 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:479
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.341 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:509
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.341 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:583
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.341 2 DEBUG nova.virt.hardware [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.12/site-packages/nova/virt/hardware.py:585
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.344 2 DEBUG nova.virt.libvirt.vif [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:45:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1973299131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1973299131',id=33,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-d8khp14h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZ
oneMigrationStrategy-1978933030-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:45:28Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=0e6d366d-93d8-4543-9f3a-bcc988af9498,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:574
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.344 2 DEBUG nova.network.os_vif_util [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.345 2 DEBUG nova.network.os_vif_util [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.346 2 DEBUG nova.objects.instance [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e6d366d-93d8-4543-9f3a-bcc988af9498 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:45:33 compute-0 podman[154215]: 2025-10-08 16:45:33.487354443 +0000 UTC m=+0.082303184 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:45:33 compute-0 podman[154216]: 2025-10-08 16:45:33.527339171 +0000 UTC m=+0.124728823 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.856 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] End _get_guest_xml xml=<domain type="kvm">
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <uuid>0e6d366d-93d8-4543-9f3a-bcc988af9498</uuid>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <name>instance-00000021</name>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <memory>131072</memory>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <vcpu>1</vcpu>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <metadata>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:package version="32.1.0-0.20251008114656.23cad1d.el10"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-1973299131</nova:name>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:creationTime>2025-10-08 16:45:33</nova:creationTime>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:flavor name="m1.nano" id="43cd5d45-bd07-4889-a671-dd23291090c1">
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:memory>128</nova:memory>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:disk>1</nova:disk>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:swap>0</nova:swap>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:ephemeral>0</nova:ephemeral>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:vcpus>1</nova:vcpus>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:extraSpecs>
Oct 08 16:45:33 compute-0 nova_compute[117413]:           <nova:extraSpec name="hw_rng:allowed">True</nova:extraSpec>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         </nova:extraSpecs>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       </nova:flavor>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:image uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43">
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:containerFormat>bare</nova:containerFormat>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:diskFormat>qcow2</nova:diskFormat>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:minDisk>1</nova:minDisk>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:minRam>0</nova:minRam>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:properties>
Oct 08 16:45:33 compute-0 nova_compute[117413]:           <nova:property name="hw_rng_model">virtio</nova:property>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         </nova:properties>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       </nova:image>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:owner>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:user uuid="7560d8247c7549c9a1a5774b411e593f">tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin</nova:user>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:project uuid="cc10ca4f587446c896aeb3ac8d6a1fea">tempest-TestExecuteZoneMigrationStrategy-1978933030</nova:project>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       </nova:owner>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:root type="image" uuid="44390e9d-4b05-4916-9ba9-97b19c79ef43"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <nova:ports>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         <nova:port uuid="6072b3be-23f3-4c1d-98c1-4a8bd769e681">
Oct 08 16:45:33 compute-0 nova_compute[117413]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:         </nova:port>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       </nova:ports>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </nova:instance>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </metadata>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <sysinfo type="smbios">
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <system>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <entry name="manufacturer">RDO</entry>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <entry name="product">OpenStack Compute</entry>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <entry name="version">32.1.0-0.20251008114656.23cad1d.el10</entry>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <entry name="serial">0e6d366d-93d8-4543-9f3a-bcc988af9498</entry>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <entry name="uuid">0e6d366d-93d8-4543-9f3a-bcc988af9498</entry>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <entry name="family">Virtual Machine</entry>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </system>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </sysinfo>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <os>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <boot dev="hd"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <smbios mode="sysinfo"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </os>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <features>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <acpi/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <apic/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <vmcoreinfo/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </features>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <clock offset="utc">
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <timer name="pit" tickpolicy="delay"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <timer name="hpet" present="no"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </clock>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <cpu mode="host-model" match="exact">
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <topology sockets="1" cores="1" threads="1"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </cpu>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   <devices>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <disk type="file" device="disk">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <driver name="qemu" type="qcow2" cache="none"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <target dev="vda" bus="virtio"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <disk type="file" device="cdrom">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <driver name="qemu" type="raw" cache="none"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <source file="/var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.config"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <target dev="sda" bus="sata"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </disk>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <interface type="ethernet">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <mac address="fa:16:3e:0f:37:45"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <driver name="vhost" rx_queue_size="512"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <mtu size="1442"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <target dev="tap6072b3be-23"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </interface>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <serial type="pty">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <log file="/var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/console.log" append="off"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </serial>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <video>
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <model type="virtio"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </video>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <input type="tablet" bus="usb"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <rng model="virtio">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <backend model="random">/dev/urandom</backend>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </rng>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="pci" model="pcie-root-port"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <controller type="usb" index="0"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     <memballoon model="virtio" autodeflate="on" freePageReporting="on">
Oct 08 16:45:33 compute-0 nova_compute[117413]:       <stats period="10"/>
Oct 08 16:45:33 compute-0 nova_compute[117413]:     </memballoon>
Oct 08 16:45:33 compute-0 nova_compute[117413]:   </devices>
Oct 08 16:45:33 compute-0 nova_compute[117413]: </domain>
Oct 08 16:45:33 compute-0 nova_compute[117413]:  _get_guest_xml /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:8052
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.857 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Preparing to wait for external event network-vif-plugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 prepare_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:307
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.857 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.858 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.858 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.859 2 DEBUG nova.virt.libvirt.vif [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='',created_at=2025-10-08T16:45:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1973299131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1973299131',id=33,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-d8khp14h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-Te
stExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:45:28Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=0e6d366d-93d8-4543-9f3a-bcc988af9498,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.860 2 DEBUG nova.network.os_vif_util [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.861 2 DEBUG nova.network.os_vif_util [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.861 2 DEBUG os_vif [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.864 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'f66dc359-387c-5d01-a098-ad9b91a1f989', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.931 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6072b3be-23, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.932 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap6072b3be-23, col_values=(('qos', UUID('2ed897ce-058d-49e0-83c2-2b1cd5ed57e2')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap6072b3be-23, col_values=(('external_ids', {'iface-id': '6072b3be-23f3-4c1d-98c1-4a8bd769e681', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:37:45', 'vm-uuid': '0e6d366d-93d8-4543-9f3a-bcc988af9498'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 NetworkManager[1034]: <info>  [1759941933.9366] manager: (tap6072b3be-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:33 compute-0 nova_compute[117413]: 2025-10-08 16:45:33.942 2 INFO os_vif [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23')
Oct 08 16:45:35 compute-0 nova_compute[117413]: 2025-10-08 16:45:35.485 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:45:35 compute-0 nova_compute[117413]: 2025-10-08 16:45:35.486 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:13022
Oct 08 16:45:35 compute-0 nova_compute[117413]: 2025-10-08 16:45:35.486 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] No VIF found with MAC fa:16:3e:0f:37:45, not building metadata _build_interface_metadata /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:12998
Oct 08 16:45:35 compute-0 nova_compute[117413]: 2025-10-08 16:45:35.487 2 INFO nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Using config drive
Oct 08 16:45:35 compute-0 nova_compute[117413]: 2025-10-08 16:45:35.998 2 WARNING neutronclient.v2_0.client [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:36 compute-0 nova_compute[117413]: 2025-10-08 16:45:36.511 2 INFO nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Creating config drive at /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.config
Oct 08 16:45:36 compute-0 nova_compute[117413]: 2025-10-08 16:45:36.516 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpun5wmg8g execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:45:36 compute-0 nova_compute[117413]: 2025-10-08 16:45:36.647 2 DEBUG oslo_concurrency.processutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 32.1.0-0.20251008114656.23cad1d.el10 -quiet -J -r -V config-2 /tmp/tmpun5wmg8g" returned: 0 in 0.131s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:45:36 compute-0 kernel: tap6072b3be-23: entered promiscuous mode
Oct 08 16:45:36 compute-0 NetworkManager[1034]: <info>  [1759941936.7362] manager: (tap6072b3be-23): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Oct 08 16:45:36 compute-0 nova_compute[117413]: 2025-10-08 16:45:36.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:36 compute-0 ovn_controller[19768]: 2025-10-08T16:45:36Z|00280|binding|INFO|Claiming lport 6072b3be-23f3-4c1d-98c1-4a8bd769e681 for this chassis.
Oct 08 16:45:36 compute-0 ovn_controller[19768]: 2025-10-08T16:45:36Z|00281|binding|INFO|6072b3be-23f3-4c1d-98c1-4a8bd769e681: Claiming fa:16:3e:0f:37:45 10.100.0.6
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.747 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:37:45 10.100.0.6'], port_security=['fa:16:3e:0f:37:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e6d366d-93d8-4543-9f3a-bcc988af9498', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=6072b3be-23f3-4c1d-98c1-4a8bd769e681) old=Port_Binding(chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.747 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 6072b3be-23f3-4c1d-98c1-4a8bd769e681 in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 bound to our chassis
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.748 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:45:36 compute-0 ovn_controller[19768]: 2025-10-08T16:45:36Z|00282|binding|INFO|Setting lport 6072b3be-23f3-4c1d-98c1-4a8bd769e681 ovn-installed in OVS
Oct 08 16:45:36 compute-0 ovn_controller[19768]: 2025-10-08T16:45:36Z|00283|binding|INFO|Setting lport 6072b3be-23f3-4c1d-98c1-4a8bd769e681 up in Southbound
Oct 08 16:45:36 compute-0 systemd-udevd[154284]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:45:36 compute-0 nova_compute[117413]: 2025-10-08 16:45:36.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.768 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[63ee2f3b-71cc-4806-b45d-602932474796]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.770 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6d7d7c0-f1 in ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 namespace provision_datapath /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:794
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.772 139805 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6d7d7c0-f0 not found in namespace None get_link_id /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:203
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.773 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8999ee-e9fe-4146-a23a-94c9a011eb19]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.774 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1727bd87-459a-4788-a52f-ebded33cee5e]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 NetworkManager[1034]: <info>  [1759941936.7843] device (tap6072b3be-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:45:36 compute-0 NetworkManager[1034]: <info>  [1759941936.7862] device (tap6072b3be-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.791 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8feffd-529f-4825-b602-445d5ca98cfb]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 systemd-machined[77548]: New machine qemu-25-instance-00000021.
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.808 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[d4161197-8162-453f-8016-e6d3590c09f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000021.
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.851 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9122b2-357e-4f38-99ef-537e1fadb32f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.856 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3ba675-8131-4b65-87ef-5773aaec9eb6]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 NetworkManager[1034]: <info>  [1759941936.8590] manager: (tapc6d7d7c0-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.903 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[77cf8af6-f23e-4653-97c7-0b482e22b59a]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.906 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ddfa68-6ac8-4268-a742-d60a5e71db05]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 NetworkManager[1034]: <info>  [1759941936.9409] device (tapc6d7d7c0-f0): carrier: link connected
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.951 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[139fe97b-a7a9-4802-8024-91ae5909e8b0]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.974 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[398c86d7-1b41-4671-b5a6-d5b122bed060]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322563, 'reachable_time': 42864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154319, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:36 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:36.997 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bc6b93-1317-4de8-82b3-e829a697ddd5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:a02e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 322563, 'tstamp': 322563}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154320, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.020 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e03fda26-bd7b-4e78-8823-bad0cfb50d3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322563, 'reachable_time': 42864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 154321, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.062 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9f699c96-5070-467e-a360-bd6943d6ecaa]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.143 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[60ae4667-3703-4a3c-81dc-73995f47ae3d]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.149 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.149 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.150 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d7d7c0-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:37 compute-0 kernel: tapc6d7d7c0-f0: entered promiscuous mode
Oct 08 16:45:37 compute-0 NetworkManager[1034]: <info>  [1759941937.1530] manager: (tapc6d7d7c0-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.156 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6d7d7c0-f0, col_values=(('external_ids', {'iface-id': 'a63b94a5-36df-4884-a4f9-6965418ea72c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:45:37 compute-0 ovn_controller[19768]: 2025-10-08T16:45:37Z|00284|binding|INFO|Releasing lport a63b94a5-36df-4884-a4f9-6965418ea72c from this chassis (sb_readonly=0)
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.159 2 DEBUG nova.compute.manager [req-8732d67b-f11a-464d-a817-2baf2dcb2d11 req-eacb41ee-2654-43d3-94fa-7f9af13a1bf1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-plugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.160 2 DEBUG oslo_concurrency.lockutils [req-8732d67b-f11a-464d-a817-2baf2dcb2d11 req-eacb41ee-2654-43d3-94fa-7f9af13a1bf1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.160 2 DEBUG oslo_concurrency.lockutils [req-8732d67b-f11a-464d-a817-2baf2dcb2d11 req-eacb41ee-2654-43d3-94fa-7f9af13a1bf1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.160 2 DEBUG oslo_concurrency.lockutils [req-8732d67b-f11a-464d-a817-2baf2dcb2d11 req-eacb41ee-2654-43d3-94fa-7f9af13a1bf1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.161 2 DEBUG nova.compute.manager [req-8732d67b-f11a-464d-a817-2baf2dcb2d11 req-eacb41ee-2654-43d3-94fa-7f9af13a1bf1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Processing event network-vif-plugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11572
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:37 compute-0 nova_compute[117413]: 2025-10-08 16:45:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.172 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea81d0f-9eab-4b95-b191-9b052f78181e]: (4, '') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.173 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.174 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.174 28633 DEBUG neutron.agent.linux.external_process [-] No haproxy process started for c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 disable /usr/lib/python3.12/site-packages/neutron/agent/linux/external_process.py:173
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.174 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.175 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[4df689e3-1cca-4de2-bd3a-f45d5ce002b7]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.175 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.176 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[347eb509-4746-44a4-b557-71af40e642dc]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.176 28633 DEBUG neutron.agent.metadata.driver_base [-] haproxy_cfg = 
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: global
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     log         /dev/log local0 debug
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     log-tag     haproxy-metadata-proxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     user        root
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     group       root
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     maxconn     1024
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     pidfile     /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     daemon
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: defaults
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     log global
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     mode http
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     option httplog
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     option dontlognull
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     option http-server-close
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     option forwardfor
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     retries                 3
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     timeout http-request    30s
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     timeout connect         30s
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     timeout client          32s
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     timeout server          32s
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     timeout http-keep-alive 30s
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: listen listener
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     bind 169.254.169.254:80
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     server metadata /var/lib/neutron/metadata_proxy
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:     http-request add-header X-OVN-Network-ID c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]:  create_config_file /usr/lib/python3.12/site-packages/neutron/agent/metadata/driver_base.py:155
Oct 08 16:45:37 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:37.177 28633 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'env', 'PROCESS_TAG=haproxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.conf'] create_process /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:84
Oct 08 16:45:37 compute-0 podman[154353]: 2025-10-08 16:45:37.628256131 +0000 UTC m=+0.071770731 container create 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, tcib_build_tag=watcher_latest, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:45:37 compute-0 systemd[1]: Started libpod-conmon-16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b.scope.
Oct 08 16:45:37 compute-0 podman[154353]: 2025-10-08 16:45:37.592934517 +0000 UTC m=+0.036449147 image pull 1b705be0a2473f9551d4f3571c1e8fc1b0bd84e013684239de53078e70a4b6e3 38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest
Oct 08 16:45:37 compute-0 systemd[1]: Started libcrun container.
Oct 08 16:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/badb507abf8c22e2d893d444814deea13eb40d5c8b2f28b3325fa75b6537f41f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 08 16:45:37 compute-0 podman[154353]: 2025-10-08 16:45:37.720745067 +0000 UTC m=+0.164259677 container init 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:45:37 compute-0 podman[154353]: 2025-10-08 16:45:37.728182801 +0000 UTC m=+0.171697391 container start 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:45:37 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [NOTICE]   (154378) : New worker (154380) forked
Oct 08 16:45:37 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [NOTICE]   (154378) : Loading success.
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.296 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:602
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.301 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Guest created on hypervisor spawn /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:4816
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.305 2 INFO nova.virt.libvirt.driver [-] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Instance spawned successfully.
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.306 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:985
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.820 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.821 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.822 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.822 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.822 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.823 2 DEBUG nova.virt.libvirt.driver [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:1014
Oct 08 16:45:38 compute-0 nova_compute[117413]: 2025-10-08 16:45:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.237 2 DEBUG nova.compute.manager [req-99b4c49e-79cd-4d03-a826-733f737deb89 req-20886fc1-2b3f-4968-a97e-01423c0d8357 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-plugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.238 2 DEBUG oslo_concurrency.lockutils [req-99b4c49e-79cd-4d03-a826-733f737deb89 req-20886fc1-2b3f-4968-a97e-01423c0d8357 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.238 2 DEBUG oslo_concurrency.lockutils [req-99b4c49e-79cd-4d03-a826-733f737deb89 req-20886fc1-2b3f-4968-a97e-01423c0d8357 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.238 2 DEBUG oslo_concurrency.lockutils [req-99b4c49e-79cd-4d03-a826-733f737deb89 req-20886fc1-2b3f-4968-a97e-01423c0d8357 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.239 2 DEBUG nova.compute.manager [req-99b4c49e-79cd-4d03-a826-733f737deb89 req-20886fc1-2b3f-4968-a97e-01423c0d8357 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] No waiting events found dispatching network-vif-plugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.239 2 WARNING nova.compute.manager [req-99b4c49e-79cd-4d03-a826-733f737deb89 req-20886fc1-2b3f-4968-a97e-01423c0d8357 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received unexpected event network-vif-plugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 for instance with vm_state building and task_state spawning.
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.332 2 INFO nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Took 9.72 seconds to spawn the instance on the hypervisor.
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.333 2 DEBUG nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:45:39 compute-0 nova_compute[117413]: 2025-10-08 16:45:39.870 2 INFO nova.compute.manager [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Took 14.95 seconds to build instance.
Oct 08 16:45:40 compute-0 nova_compute[117413]: 2025-10-08 16:45:40.376 2 DEBUG oslo_concurrency.lockutils [None req-da87f41a-4715-4121-ae4d-98c652cb88d0 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.473s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:41.942 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:45:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:41.942 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:45:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:45:41.943 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:45:42 compute-0 nova_compute[117413]: 2025-10-08 16:45:42.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:43 compute-0 nova_compute[117413]: 2025-10-08 16:45:43.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:45 compute-0 podman[154392]: 2025-10-08 16:45:45.458881303 +0000 UTC m=+0.069170897 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 08 16:45:47 compute-0 nova_compute[117413]: 2025-10-08 16:45:47.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:48 compute-0 nova_compute[117413]: 2025-10-08 16:45:48.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:50 compute-0 ovn_controller[19768]: 2025-10-08T16:45:50Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:37:45 10.100.0.6
Oct 08 16:45:50 compute-0 ovn_controller[19768]: 2025-10-08T16:45:50Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:37:45 10.100.0.6
Oct 08 16:45:51 compute-0 podman[154425]: 2025-10-08 16:45:51.467022398 +0000 UTC m=+0.078357141 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Oct 08 16:45:52 compute-0 nova_compute[117413]: 2025-10-08 16:45:52.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:52 compute-0 nova_compute[117413]: 2025-10-08 16:45:52.614 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Creating tmpfile /var/lib/nova/instances/tmptvvlj0uj to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:10944
Oct 08 16:45:52 compute-0 nova_compute[117413]: 2025-10-08 16:45:52.616 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:52 compute-0 nova_compute[117413]: 2025-10-08 16:45:52.634 2 DEBUG nova.compute.manager [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmptvvlj0uj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst=<?>,serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.12/site-packages/nova/compute/manager.py:9086
Oct 08 16:45:53 compute-0 nova_compute[117413]: 2025-10-08 16:45:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:54 compute-0 nova_compute[117413]: 2025-10-08 16:45:54.697 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:45:57 compute-0 nova_compute[117413]: 2025-10-08 16:45:57.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:58 compute-0 podman[154451]: 2025-10-08 16:45:58.478457178 +0000 UTC m=+0.068652532 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS)
Oct 08 16:45:58 compute-0 podman[154450]: 2025-10-08 16:45:58.494971462 +0000 UTC m=+0.092836957 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:45:58 compute-0 nova_compute[117413]: 2025-10-08 16:45:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:45:59 compute-0 nova_compute[117413]: 2025-10-08 16:45:59.291 2 DEBUG nova.compute.manager [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmptvvlj0uj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4adb8b2b-205b-48bc-b01a-3ebb1060faf1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9311
Oct 08 16:45:59 compute-0 podman[127881]: time="2025-10-08T16:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:45:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:45:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3497 "" "Go-http-client/1.1"
Oct 08 16:46:00 compute-0 nova_compute[117413]: 2025-10-08 16:46:00.307 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-4adb8b2b-205b-48bc-b01a-3ebb1060faf1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:46:00 compute-0 nova_compute[117413]: 2025-10-08 16:46:00.308 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-4adb8b2b-205b-48bc-b01a-3ebb1060faf1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:46:00 compute-0 nova_compute[117413]: 2025-10-08 16:46:00.309 2 DEBUG nova.network.neutron [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:46:00 compute-0 nova_compute[117413]: 2025-10-08 16:46:00.824 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: ERROR   16:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: ERROR   16:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: ERROR   16:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: ERROR   16:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:46:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:46:01 compute-0 nova_compute[117413]: 2025-10-08 16:46:01.705 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:01 compute-0 nova_compute[117413]: 2025-10-08 16:46:01.887 2 DEBUG nova.network.neutron [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Updating instance_info_cache with network_info: [{"id": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "address": "fa:16:3e:3c:32:cb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bc1dd29-89", "ovs_interfaceid": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.394 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-4adb8b2b-205b-48bc-b01a-3ebb1060faf1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.414 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmptvvlj0uj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4adb8b2b-205b-48bc-b01a-3ebb1060faf1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=<?>,source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11737
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.414 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Creating instance directory: /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1 pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11750
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.415 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Creating disk.info with the contents: {'/var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk': 'qcow2', '/var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk.config': 'raw'} pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11764
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.415 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11774
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.415 2 DEBUG nova.objects.instance [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4adb8b2b-205b-48bc-b01a-3ebb1060faf1 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.923 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.930 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:46:02 compute-0 nova_compute[117413]: 2025-10-08 16:46:02.932 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.030 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.032 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.032 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.034 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vmdk does not match, excluding from consideration (Signature KDMV not found: b'\xebH\x90\x00') _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.041 2 DEBUG oslo_utils.imageutils.format_inspector [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Format inspector for vhdx does not match, excluding from consideration (Region signature not found at 30000) _process_chunk /usr/lib/python3.12/site-packages/oslo_utils/imageutils/format_inspector.py:1365
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.042 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.136 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.137 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk 1073741824 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.188 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61,backing_fmt=raw /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.190 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.158s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.191 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.269 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb0a2b6359b6621d7d59a1b3a5d5693ec78dce61 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.271 2 DEBUG nova.virt.disk.api [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Checking if we can resize image /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk. size=1073741824 can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:164
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.272 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.344 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.346 2 DEBUG nova.virt.disk.api [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Cannot resize image /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk to a smaller size. can_resize_image /usr/lib/python3.12/site-packages/nova/virt/disk/api.py:170
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.347 2 DEBUG nova.objects.instance [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lazy-loading 'migration_context' on Instance uuid 4adb8b2b-205b-48bc-b01a-3ebb1060faf1 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.855 2 DEBUG nova.objects.base [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Object Instance<4adb8b2b-205b-48bc-b01a-3ebb1060faf1> lazy-loaded attributes: trusted_certs,migration_context wrapper /usr/lib/python3.12/site-packages/nova/objects/base.py:136
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.856 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk.config 497664 execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.899 2 DEBUG oslo_concurrency.processutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk.config 497664" returned: 0 in 0.044s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.900 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11704
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.901 2 DEBUG nova.virt.libvirt.vif [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=1,config_drive='True',created_at=2025-10-08T16:45:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1574760788',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1574760788',id=32,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:45:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-b6dcuszf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-08T16:45:18Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=4adb8b2b-205b-48bc-b01a-3ebb1060faf1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "address": "fa:16:3e:3c:32:cb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4bc1dd29-89", "ovs_interfaceid": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:721
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.902 2 DEBUG nova.network.os_vif_util [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converting VIF {"id": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "address": "fa:16:3e:3c:32:cb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap4bc1dd29-89", "ovs_interfaceid": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.902 2 DEBUG nova.network.os_vif_util [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:cb,bridge_name='br-int',has_traffic_filtering=True,id=4bc1dd29-8989-40af-9718-f66ae5c0f09b,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bc1dd29-89') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.903 2 DEBUG os_vif [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:cb,bridge_name='br-int',has_traffic_filtering=True,id=4bc1dd29-8989-40af-9718-f66ae5c0f09b,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bc1dd29-89') plug /usr/lib/python3.12/site-packages/os_vif/__init__.py:76
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.904 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.904 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbCreateCommand(_result=None, table=QoS, columns={'type': 'linux-noop', 'external_ids': {'id': 'd1dd2936-66e2-5619-a891-45264db2ad7c', '_type': 'linux-noop'}}, row=False) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.910 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bc1dd29-89, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.910 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Port, record=tap4bc1dd29-89, col_values=(('qos', UUID('b6e3a239-af7d-4abe-964b-b85ff603be35')),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=2): DbSetCommand(_result=None, table=Interface, record=tap4bc1dd29-89, col_values=(('external_ids', {'iface-id': '4bc1dd29-8989-40af-9718-f66ae5c0f09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:32:cb', 'vm-uuid': '4adb8b2b-205b-48bc-b01a-3ebb1060faf1'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 NetworkManager[1034]: <info>  [1759941963.9128] manager: (tap4bc1dd29-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.923 2 INFO os_vif [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:32:cb,bridge_name='br-int',has_traffic_filtering=True,id=4bc1dd29-8989-40af-9718-f66ae5c0f09b,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bc1dd29-89')
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.924 2 DEBUG nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.12/site-packages/nova/virt/libvirt/driver.py:11851
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.924 2 DEBUG nova.compute.manager [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmptvvlj0uj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4adb8b2b-205b-48bc-b01a-3ebb1060faf1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9377
Oct 08 16:46:03 compute-0 nova_compute[117413]: 2025-10-08 16:46:03.926 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:04 compute-0 nova_compute[117413]: 2025-10-08 16:46:04.391 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:04 compute-0 podman[154507]: 2025-10-08 16:46:04.4611678 +0000 UTC m=+0.063886525 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:46:04 compute-0 podman[154508]: 2025-10-08 16:46:04.502319052 +0000 UTC m=+0.099551780 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 08 16:46:05 compute-0 nova_compute[117413]: 2025-10-08 16:46:05.590 2 DEBUG nova.network.neutron [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Port 4bc1dd29-8989-40af-9718-f66ae5c0f09b updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.12/site-packages/nova/network/neutron.py:356
Oct 08 16:46:05 compute-0 nova_compute[117413]: 2025-10-08 16:46:05.618 2 DEBUG nova.compute.manager [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_cpu_shared_set_info=set([]),dst_numa_info=<?>,dst_supports_mdev_live_migration=True,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmptvvlj0uj',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4adb8b2b-205b-48bc-b01a-3ebb1060faf1',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},pci_dev_map_src_dst={},serial_listen_addr=None,serial_listen_ports=[],source_mdev_types=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,target_mdevs=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.12/site-packages/nova/compute/manager.py:9443
Oct 08 16:46:06 compute-0 ovn_controller[19768]: 2025-10-08T16:46:06Z|00285|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 08 16:46:07 compute-0 nova_compute[117413]: 2025-10-08 16:46:07.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:08 compute-0 kernel: tap4bc1dd29-89: entered promiscuous mode
Oct 08 16:46:08 compute-0 NetworkManager[1034]: <info>  [1759941968.6776] manager: (tap4bc1dd29-89): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Oct 08 16:46:08 compute-0 nova_compute[117413]: 2025-10-08 16:46:08.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:08 compute-0 ovn_controller[19768]: 2025-10-08T16:46:08Z|00286|binding|INFO|Claiming lport 4bc1dd29-8989-40af-9718-f66ae5c0f09b for this additional chassis.
Oct 08 16:46:08 compute-0 ovn_controller[19768]: 2025-10-08T16:46:08Z|00287|binding|INFO|4bc1dd29-8989-40af-9718-f66ae5c0f09b: Claiming fa:16:3e:3c:32:cb 10.100.0.10
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.688 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:32:cb 10.100.0.10'], port_security=['fa:16:3e:3c:32:cb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4adb8b2b-205b-48bc-b01a-3ebb1060faf1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '10', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=4bc1dd29-8989-40af-9718-f66ae5c0f09b) old=Port_Binding(additional_chassis=[]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.690 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 4bc1dd29-8989-40af-9718-f66ae5c0f09b in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 unbound from our chassis
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.692 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:46:08 compute-0 ovn_controller[19768]: 2025-10-08T16:46:08Z|00288|binding|INFO|Setting lport 4bc1dd29-8989-40af-9718-f66ae5c0f09b ovn-installed in OVS
Oct 08 16:46:08 compute-0 nova_compute[117413]: 2025-10-08 16:46:08.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:08 compute-0 nova_compute[117413]: 2025-10-08 16:46:08.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.720 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[1b487fba-dc98-4bb6-bba0-ceb3b18c5c58]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 systemd-udevd[154573]: Network interface NamePolicy= disabled on kernel command line.
Oct 08 16:46:08 compute-0 systemd-machined[77548]: New machine qemu-26-instance-00000020.
Oct 08 16:46:08 compute-0 NetworkManager[1034]: <info>  [1759941968.7424] device (tap4bc1dd29-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 08 16:46:08 compute-0 NetworkManager[1034]: <info>  [1759941968.7435] device (tap4bc1dd29-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 08 16:46:08 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000020.
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.754 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[59eda490-e155-4614-b8aa-860d06f794b8]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.758 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[d03ee121-c84c-441a-8658-c5c9165e7c9f]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.799 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[f14f52b9-e5f1-43ba-96e8-0bbf3873243e]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.820 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[99a7749d-73a0-4985-b503-1e0e90ec2e26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322563, 'reachable_time': 42864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154587, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.847 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2dcf6b-3fd7-4c35-8809-abbdbe20c7fb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 322578, 'tstamp': 322578}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154588, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 322582, 'tstamp': 322582}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154588, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.849 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.853 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d7d7c0-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.853 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:46:08 compute-0 nova_compute[117413]: 2025-10-08 16:46:08.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.853 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6d7d7c0-f0, col_values=(('external_ids', {'iface-id': 'a63b94a5-36df-4884-a4f9-6965418ea72c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:08 compute-0 nova_compute[117413]: 2025-10-08 16:46:08.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.854 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:46:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:08.856 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce25dc7-0eb5-43d1-a6bf-81de4e2865a8]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:08 compute-0 nova_compute[117413]: 2025-10-08 16:46:08.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:11.152 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:46:11 compute-0 nova_compute[117413]: 2025-10-08 16:46:11.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:11.153 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:46:11 compute-0 ovn_controller[19768]: 2025-10-08T16:46:11Z|00289|binding|INFO|Claiming lport 4bc1dd29-8989-40af-9718-f66ae5c0f09b for this chassis.
Oct 08 16:46:11 compute-0 ovn_controller[19768]: 2025-10-08T16:46:11Z|00290|binding|INFO|4bc1dd29-8989-40af-9718-f66ae5c0f09b: Claiming fa:16:3e:3c:32:cb 10.100.0.10
Oct 08 16:46:11 compute-0 ovn_controller[19768]: 2025-10-08T16:46:11Z|00291|binding|INFO|Setting lport 4bc1dd29-8989-40af-9718-f66ae5c0f09b up in Southbound
Oct 08 16:46:12 compute-0 nova_compute[117413]: 2025-10-08 16:46:12.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:12 compute-0 nova_compute[117413]: 2025-10-08 16:46:12.585 2 INFO nova.compute.manager [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Post operation of migration started
Oct 08 16:46:12 compute-0 nova_compute[117413]: 2025-10-08 16:46:12.586 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.395 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.395 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.484 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "refresh_cache-4adb8b2b-205b-48bc-b01a-3ebb1060faf1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:313
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.485 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquired lock "refresh_cache-4adb8b2b-205b-48bc-b01a-3ebb1060faf1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:316
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.485 2 DEBUG nova.network.neutron [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:2070
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:13 compute-0 nova_compute[117413]: 2025-10-08 16:46:13.992 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:14 compute-0 nova_compute[117413]: 2025-10-08 16:46:14.661 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:14 compute-0 nova_compute[117413]: 2025-10-08 16:46:14.843 2 DEBUG nova.network.neutron [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Updating instance_info_cache with network_info: [{"id": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "address": "fa:16:3e:3c:32:cb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bc1dd29-89", "ovs_interfaceid": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:46:15 compute-0 nova_compute[117413]: 2025-10-08 16:46:15.351 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Releasing lock "refresh_cache-4adb8b2b-205b-48bc-b01a-3ebb1060faf1" lock /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:334
Oct 08 16:46:15 compute-0 nova_compute[117413]: 2025-10-08 16:46:15.874 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:15 compute-0 nova_compute[117413]: 2025-10-08 16:46:15.874 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:15 compute-0 nova_compute[117413]: 2025-10-08 16:46:15.875 2 DEBUG oslo_concurrency.lockutils [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:15 compute-0 nova_compute[117413]: 2025-10-08 16:46:15.880 2 INFO nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 08 16:46:15 compute-0 virtqemud[117740]: Domain id=26 name='instance-00000020' uuid=4adb8b2b-205b-48bc-b01a-3ebb1060faf1 is tainted: custom-monitor
Oct 08 16:46:16 compute-0 podman[154607]: 2025-10-08 16:46:16.476497437 +0000 UTC m=+0.076575260 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:46:16 compute-0 nova_compute[117413]: 2025-10-08 16:46:16.887 2 INFO nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 08 16:46:17 compute-0 nova_compute[117413]: 2025-10-08 16:46:17.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:17 compute-0 nova_compute[117413]: 2025-10-08 16:46:17.895 2 INFO nova.virt.libvirt.driver [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 08 16:46:17 compute-0 nova_compute[117413]: 2025-10-08 16:46:17.902 2 DEBUG nova.compute.manager [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Checking state _get_power_state /usr/lib/python3.12/site-packages/nova/compute/manager.py:1825
Oct 08 16:46:18 compute-0 nova_compute[117413]: 2025-10-08 16:46:18.415 2 DEBUG nova.objects.instance [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:46:18 compute-0 nova_compute[117413]: 2025-10-08 16:46:18.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:19.155 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:19 compute-0 nova_compute[117413]: 2025-10-08 16:46:19.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:19 compute-0 nova_compute[117413]: 2025-10-08 16:46:19.433 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.412 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.413 2 WARNING neutronclient.v2_0.client [None req-7b2e6659-2684-4fb6-b3b3-d9bf497af018 ef6697ef954242e987d91b07e7d7f6d1 5e690a68a5624a97b59c058430039321 - - default default] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.875 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.875 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:20 compute-0 nova_compute[117413]: 2025-10-08 16:46:20.876 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:46:21 compute-0 nova_compute[117413]: 2025-10-08 16:46:21.936 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.005 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.006 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.071 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.080 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.146 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.148 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk --force-share --output=json execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.211 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.462 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.463 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.487 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.487 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5785MB free_disk=73.1916389465332GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.488 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:22 compute-0 nova_compute[117413]: 2025-10-08 16:46:22.488 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:22 compute-0 podman[154642]: 2025-10-08 16:46:22.521033664 +0000 UTC m=+0.117908046 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Oct 08 16:46:23 compute-0 nova_compute[117413]: 2025-10-08 16:46:23.506 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Applying migration context for instance 4adb8b2b-205b-48bc-b01a-3ebb1060faf1 as it has an incoming, in-progress migration 554ca288-2552-4ebc-9d2d-6da6cedf175b. Migration status is running _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1046
Oct 08 16:46:23 compute-0 nova_compute[117413]: 2025-10-08 16:46:23.506 2 DEBUG nova.objects.instance [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.12/site-packages/nova/objects/instance.py:1067
Oct 08 16:46:23 compute-0 nova_compute[117413]: 2025-10-08 16:46:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.014 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1596
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.047 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 0e6d366d-93d8-4543-9f3a-bcc988af9498 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.047 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Instance 4adb8b2b-205b-48bc-b01a-3ebb1060faf1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1740
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.048 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.048 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:46:22 up 54 min,  0 user,  load average: 0.51, 0.33, 0.23\n', 'num_instances': '2', 'num_vm_active': '2', 'num_task_None': '2', 'num_os_type_None': '2', 'num_proj_cc10ca4f587446c896aeb3ac8d6a1fea': '2', 'io_workload': '0'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.087 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.198 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.200 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.201 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.201 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.201 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.212 2 INFO nova.compute.manager [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Terminating instance
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.594 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.726 2 DEBUG nova.compute.manager [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:46:24 compute-0 kernel: tap6072b3be-23 (unregistering): left promiscuous mode
Oct 08 16:46:24 compute-0 NetworkManager[1034]: <info>  [1759941984.7559] device (tap6072b3be-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:46:24 compute-0 ovn_controller[19768]: 2025-10-08T16:46:24Z|00292|binding|INFO|Releasing lport 6072b3be-23f3-4c1d-98c1-4a8bd769e681 from this chassis (sb_readonly=0)
Oct 08 16:46:24 compute-0 ovn_controller[19768]: 2025-10-08T16:46:24Z|00293|binding|INFO|Setting lport 6072b3be-23f3-4c1d-98c1-4a8bd769e681 down in Southbound
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:24 compute-0 ovn_controller[19768]: 2025-10-08T16:46:24Z|00294|binding|INFO|Removing iface tap6072b3be-23 ovn-installed in OVS
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.780 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:37:45 10.100.0.6'], port_security=['fa:16:3e:0f:37:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e6d366d-93d8-4543-9f3a-bcc988af9498', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '5', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=6072b3be-23f3-4c1d-98c1-4a8bd769e681) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.781 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 6072b3be-23f3-4c1d-98c1-4a8bd769e681 in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 unbound from our chassis
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.783 28633 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.804 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[9bde771a-0f73-4185-8aae-b1b8fcbcd31b]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct 08 16:46:24 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000021.scope: Consumed 14.642s CPU time.
Oct 08 16:46:24 compute-0 systemd-machined[77548]: Machine qemu-25-instance-00000021 terminated.
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.847 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[0e45dfad-6493-4c54-ad50-4e6056cd22d9]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.851 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[35d1fe04-4d30-4354-8ab0-8bfed1355124]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.887 141656 DEBUG oslo.privsep.daemon [-] privsep: reply[c9404e88-b3b2-4025-9479-3c0f897f7332]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.907 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[91dc0383-9d2b-4a85-86c4-5864b89657e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6d7d7c0-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:a0:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 8, 'rx_bytes': 1756, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 8, 'rx_bytes': 1756, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322563, 'reachable_time': 42864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154676, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.922 2 DEBUG nova.compute.manager [req-b6f5601e-bbfd-4a7b-aae2-9374151c0b08 req-d544a308-caf0-46e5-aa57-845a16dc34b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-unplugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.922 2 DEBUG oslo_concurrency.lockutils [req-b6f5601e-bbfd-4a7b-aae2-9374151c0b08 req-d544a308-caf0-46e5-aa57-845a16dc34b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.923 2 DEBUG oslo_concurrency.lockutils [req-b6f5601e-bbfd-4a7b-aae2-9374151c0b08 req-d544a308-caf0-46e5-aa57-845a16dc34b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.923 2 DEBUG oslo_concurrency.lockutils [req-b6f5601e-bbfd-4a7b-aae2-9374151c0b08 req-d544a308-caf0-46e5-aa57-845a16dc34b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.923 2 DEBUG nova.compute.manager [req-b6f5601e-bbfd-4a7b-aae2-9374151c0b08 req-d544a308-caf0-46e5-aa57-845a16dc34b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] No waiting events found dispatching network-vif-unplugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.923 2 DEBUG nova.compute.manager [req-b6f5601e-bbfd-4a7b-aae2-9374151c0b08 req-d544a308-caf0-46e5-aa57-845a16dc34b4 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-unplugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.927 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[45656be4-7f86-4645-9ea2-323da2f242be]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 322578, 'tstamp': 322578}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154677, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc6d7d7c0-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 322582, 'tstamp': 322582}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 154677, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.928 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.936 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6d7d7c0-f0, may_exist=True, interface_attrs={}) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.936 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.937 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6d7d7c0-f0, col_values=(('external_ids', {'iface-id': 'a63b94a5-36df-4884-a4f9-6965418ea72c'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.937 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 08 16:46:24 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:24.939 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[75b6b3a3-0a4e-4410-af8a-25be5300fb23]: (4, '\nglobal\n    log         /dev/log local0 debug\n    log-tag     haproxy-metadata-proxy-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n    user        root\n    group       root\n    maxconn     1024\n    pidfile     /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy\n    daemon\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    option http-server-close\n    option forwardfor\n    retries                 3\n    timeout http-request    30s\n    timeout connect         30s\n    timeout client          32s\n    timeout server          32s\n    timeout http-keep-alive 30s\n\nlisten listener\n    bind 169.254.169.254:80\n    \n    server metadata /var/lib/neutron/metadata_proxy\n\n    http-request add-header X-OVN-Network-ID c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3\n') _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.998 2 INFO nova.virt.libvirt.driver [-] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Instance destroyed successfully.
Oct 08 16:46:24 compute-0 nova_compute[117413]: 2025-10-08 16:46:24.999 2 DEBUG nova.objects.instance [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lazy-loading 'resources' on Instance uuid 0e6d366d-93d8-4543-9f3a-bcc988af9498 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.103 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.104 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.616s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.507 2 DEBUG nova.virt.libvirt.vif [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,compute_id=2,config_drive='True',created_at=2025-10-08T16:45:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1973299131',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1973299131',id=33,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:45:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-d8khp14h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:45:39Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=0e6d366d-93d8-4543-9f3a-bcc988af9498,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.508 2 DEBUG nova.network.os_vif_util [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "address": "fa:16:3e:0f:37:45", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6072b3be-23", "ovs_interfaceid": "6072b3be-23f3-4c1d-98c1-4a8bd769e681", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.509 2 DEBUG nova.network.os_vif_util [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.509 2 DEBUG os_vif [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6072b3be-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=2ed897ce-058d-49e0-83c2-2b1cd5ed57e2) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.519 2 INFO os_vif [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:37:45,bridge_name='br-int',has_traffic_filtering=True,id=6072b3be-23f3-4c1d-98c1-4a8bd769e681,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6072b3be-23')
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.520 2 INFO nova.virt.libvirt.driver [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Deleting instance files /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498_del
Oct 08 16:46:25 compute-0 nova_compute[117413]: 2025-10-08 16:46:25.521 2 INFO nova.virt.libvirt.driver [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Deletion of /var/lib/nova/instances/0e6d366d-93d8-4543-9f3a-bcc988af9498_del complete
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.035 2 INFO nova.compute.manager [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Took 1.31 seconds to destroy the instance on the hypervisor.
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.037 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.037 2 DEBUG nova.compute.manager [-] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.037 2 DEBUG nova.network.neutron [-] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.038 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.099 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.099 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.099 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.100 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.100 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.394 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.989 2 DEBUG nova.compute.manager [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-unplugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.990 2 DEBUG oslo_concurrency.lockutils [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.990 2 DEBUG oslo_concurrency.lockutils [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.991 2 DEBUG oslo_concurrency.lockutils [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.991 2 DEBUG nova.compute.manager [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] No waiting events found dispatching network-vif-unplugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.991 2 DEBUG nova.compute.manager [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-unplugged-6072b3be-23f3-4c1d-98c1-4a8bd769e681 for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.992 2 DEBUG nova.compute.manager [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Received event network-vif-deleted-6072b3be-23f3-4c1d-98c1-4a8bd769e681 external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.992 2 INFO nova.compute.manager [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Neutron deleted interface 6072b3be-23f3-4c1d-98c1-4a8bd769e681; detaching it from the instance and deleting it from the info cache
Oct 08 16:46:26 compute-0 nova_compute[117413]: 2025-10-08 16:46:26.992 2 DEBUG nova.network.neutron [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:46:27 compute-0 nova_compute[117413]: 2025-10-08 16:46:27.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:27 compute-0 nova_compute[117413]: 2025-10-08 16:46:27.157 2 DEBUG nova.network.neutron [-] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:46:27 compute-0 nova_compute[117413]: 2025-10-08 16:46:27.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:27 compute-0 nova_compute[117413]: 2025-10-08 16:46:27.504 2 DEBUG nova.compute.manager [req-30d8a734-0d2f-4977-a24c-1ecb4d937580 req-40f8f41b-a80a-47fc-9573-785c1f1c5489 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Detach interface failed, port_id=6072b3be-23f3-4c1d-98c1-4a8bd769e681, reason: Instance 0e6d366d-93d8-4543-9f3a-bcc988af9498 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11646
Oct 08 16:46:27 compute-0 nova_compute[117413]: 2025-10-08 16:46:27.665 2 INFO nova.compute.manager [-] [instance: 0e6d366d-93d8-4543-9f3a-bcc988af9498] Took 1.63 seconds to deallocate network for instance.
Oct 08 16:46:28 compute-0 nova_compute[117413]: 2025-10-08 16:46:28.190 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:28 compute-0 nova_compute[117413]: 2025-10-08 16:46:28.191 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:28 compute-0 nova_compute[117413]: 2025-10-08 16:46:28.270 2 DEBUG nova.compute.provider_tree [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:46:28 compute-0 nova_compute[117413]: 2025-10-08 16:46:28.798 2 DEBUG nova.scheduler.client.report [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:46:29 compute-0 nova_compute[117413]: 2025-10-08 16:46:29.344 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.153s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:29 compute-0 nova_compute[117413]: 2025-10-08 16:46:29.459 2 INFO nova.scheduler.client.report [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Deleted allocations for instance 0e6d366d-93d8-4543-9f3a-bcc988af9498
Oct 08 16:46:29 compute-0 podman[154697]: 2025-10-08 16:46:29.470343441 +0000 UTC m=+0.067321114 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Oct 08 16:46:29 compute-0 podman[154696]: 2025-10-08 16:46:29.471952547 +0000 UTC m=+0.068318533 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Oct 08 16:46:29 compute-0 podman[127881]: time="2025-10-08T16:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:46:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20762 "" "Go-http-client/1.1"
Oct 08 16:46:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3500 "" "Go-http-client/1.1"
Oct 08 16:46:30 compute-0 nova_compute[117413]: 2025-10-08 16:46:30.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:30 compute-0 nova_compute[117413]: 2025-10-08 16:46:30.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:30 compute-0 nova_compute[117413]: 2025-10-08 16:46:30.663 2 DEBUG oslo_concurrency.lockutils [None req-f9566bce-e968-4c80-8cb7-241f6a13c5ad 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "0e6d366d-93d8-4543-9f3a-bcc988af9498" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.463s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: ERROR   16:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: ERROR   16:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: ERROR   16:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: ERROR   16:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:46:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.624 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.625 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.625 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.627 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.627 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:32 compute-0 nova_compute[117413]: 2025-10-08 16:46:32.674 2 INFO nova.compute.manager [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Terminating instance
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.245 2 DEBUG nova.compute.manager [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.12/site-packages/nova/compute/manager.py:3197
Oct 08 16:46:33 compute-0 kernel: tap4bc1dd29-89 (unregistering): left promiscuous mode
Oct 08 16:46:33 compute-0 NetworkManager[1034]: <info>  [1759941993.2936] device (tap4bc1dd29-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 08 16:46:33 compute-0 ovn_controller[19768]: 2025-10-08T16:46:33Z|00295|binding|INFO|Releasing lport 4bc1dd29-8989-40af-9718-f66ae5c0f09b from this chassis (sb_readonly=0)
Oct 08 16:46:33 compute-0 ovn_controller[19768]: 2025-10-08T16:46:33Z|00296|binding|INFO|Setting lport 4bc1dd29-8989-40af-9718-f66ae5c0f09b down in Southbound
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:33 compute-0 ovn_controller[19768]: 2025-10-08T16:46:33Z|00297|binding|INFO|Removing iface tap4bc1dd29-89 ovn-installed in OVS
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.345 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:32:cb 10.100.0.10'], port_security=['fa:16:3e:3c:32:cb 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4adb8b2b-205b-48bc-b01a-3ebb1060faf1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc10ca4f587446c896aeb3ac8d6a1fea', 'neutron:revision_number': '15', 'neutron:security_group_ids': '748d2b7e-80c6-40c6-bf04-afd53bb7b30a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556febef-7d7e-4cc6-af5d-a844b7512e41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>], logical_port=4bc1dd29-8989-40af-9718-f66ae5c0f09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0b741d6630>]) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.347 28633 INFO neutron.agent.ovn.metadata.agent [-] Port 4bc1dd29-8989-40af-9718-f66ae5c0f09b in datapath c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 unbound from our chassis
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.349 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.350 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb3145b-4a38-4cc5-84d2-06311deffdb3]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.351 28633 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 namespace which is not needed anymore
Oct 08 16:46:33 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct 08 16:46:33 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000020.scope: Consumed 2.708s CPU time.
Oct 08 16:46:33 compute-0 systemd-machined[77548]: Machine qemu-26-instance-00000020 terminated.
Oct 08 16:46:33 compute-0 unix_chkpwd[154750]: password check failed for user (root)
Oct 08 16:46:33 compute-0 sshd-session[154736]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.523 2 DEBUG nova.compute.manager [req-9eff5ed5-1fb1-416d-84d8-42909669c7de req-c2b38050-548f-4fe5-bad3-0f1b965f2972 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Received event network-vif-unplugged-4bc1dd29-8989-40af-9718-f66ae5c0f09b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.524 2 DEBUG oslo_concurrency.lockutils [req-9eff5ed5-1fb1-416d-84d8-42909669c7de req-c2b38050-548f-4fe5-bad3-0f1b965f2972 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.525 2 DEBUG oslo_concurrency.lockutils [req-9eff5ed5-1fb1-416d-84d8-42909669c7de req-c2b38050-548f-4fe5-bad3-0f1b965f2972 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.525 2 DEBUG oslo_concurrency.lockutils [req-9eff5ed5-1fb1-416d-84d8-42909669c7de req-c2b38050-548f-4fe5-bad3-0f1b965f2972 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.525 2 DEBUG nova.compute.manager [req-9eff5ed5-1fb1-416d-84d8-42909669c7de req-c2b38050-548f-4fe5-bad3-0f1b965f2972 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] No waiting events found dispatching network-vif-unplugged-4bc1dd29-8989-40af-9718-f66ae5c0f09b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.525 2 DEBUG nova.compute.manager [req-9eff5ed5-1fb1-416d-84d8-42909669c7de req-c2b38050-548f-4fe5-bad3-0f1b965f2972 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Received event network-vif-unplugged-4bc1dd29-8989-40af-9718-f66ae5c0f09b for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.529 2 INFO nova.virt.libvirt.driver [-] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Instance destroyed successfully.
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.530 2 DEBUG nova.objects.instance [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lazy-loading 'resources' on Instance uuid 4adb8b2b-205b-48bc-b01a-3ebb1060faf1 obj_load_attr /usr/lib/python3.12/site-packages/nova/objects/instance.py:1141
Oct 08 16:46:33 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [NOTICE]   (154378) : haproxy version is 3.0.5-8e879a5
Oct 08 16:46:33 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [NOTICE]   (154378) : path to executable is /usr/sbin/haproxy
Oct 08 16:46:33 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [WARNING]  (154378) : Exiting Master process...
Oct 08 16:46:33 compute-0 podman[154768]: 2025-10-08 16:46:33.549586509 +0000 UTC m=+0.053923869 container kill 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:46:33 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [ALERT]    (154378) : Current worker (154380) exited with code 143 (Terminated)
Oct 08 16:46:33 compute-0 neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3[154368]: [WARNING]  (154378) : All workers exited. Exiting... (0)
Oct 08 16:46:33 compute-0 systemd[1]: libpod-16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b.scope: Deactivated successfully.
Oct 08 16:46:33 compute-0 podman[154794]: 2025-10-08 16:46:33.623327137 +0000 UTC m=+0.042658226 container died 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b-userdata-shm.mount: Deactivated successfully.
Oct 08 16:46:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-badb507abf8c22e2d893d444814deea13eb40d5c8b2f28b3325fa75b6537f41f-merged.mount: Deactivated successfully.
Oct 08 16:46:33 compute-0 podman[154794]: 2025-10-08 16:46:33.678548222 +0000 UTC m=+0.097879251 container cleanup 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, org.label-schema.vendor=CentOS)
Oct 08 16:46:33 compute-0 systemd[1]: libpod-conmon-16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b.scope: Deactivated successfully.
Oct 08 16:46:33 compute-0 podman[154796]: 2025-10-08 16:46:33.705940109 +0000 UTC m=+0.103739640 container remove 16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.714 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[14442139-ca70-48cc-9745-4926b8c23c0a]: (4, ("Wed Oct  8 04:46:33 PM UTC 2025 Sending signal '15' to neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 (16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b)\n16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b\nWed Oct  8 04:46:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 (16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b)\n16f953988eced64f11a323a0496ad2a0d399fab8a0465152407488aa4cb7256b\n", '', 0)) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.717 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[15976c3a-a41e-4b68-ba48-32be2b2abf31]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.717 28633 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3.pid.haproxy' get_value_from_file /usr/lib/python3.12/site-packages/neutron/agent/linux/utils.py:268
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.718 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0e75a0-d663-4588-9bc6-cc8a46e12f55]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.719 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6d7d7c0-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:33 compute-0 kernel: tapc6d7d7c0-f0: left promiscuous mode
Oct 08 16:46:33 compute-0 nova_compute[117413]: 2025-10-08 16:46:33.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.754 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5fd442-960e-4b29-ae30-f3465fb9b5d6]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.781 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[22e44ecb-4a6c-444f-8cb2-b4ece0609839]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.783 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ff1a5d-f76e-4400-934d-4c6ade3076c0]: (4, True) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.803 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[7e80ff4c-9dc6-41d5-a7ee-b248bef2266c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322553, 'reachable_time': 27170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 154829, 'error': None, 'target': 'ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.806 28777 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3 deleted. remove_netns /usr/lib/python3.12/site-packages/neutron/privileged/agent/linux/ip_lib.py:603
Oct 08 16:46:33 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:33.806 28777 DEBUG oslo.privsep.daemon [-] privsep: reply[b773e820-688a-4573-9878-3561942f7577]: (4, None) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:46:33 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6d7d7c0\x2dfbc5\x2d4242\x2da987\x2dda6fff2b6bc3.mount: Deactivated successfully.
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.055 2 DEBUG nova.virt.libvirt.vif [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,compute_id=2,config_drive='True',created_at=2025-10-08T16:45:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-1574760788',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-1574760788',id=32,image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-08T16:45:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc10ca4f587446c896aeb3ac8d6a1fea',ramdisk_id='',reservation_id='r-b6dcuszf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin,manager',clean_attempts='1',image_base_image_ref='44390e9d-4b05-4916-9ba9-97b19c79ef43',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-1978933030',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-1978933030-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-08T16:46:18Z,user_data=None,user_id='7560d8247c7549c9a1a5774b411e593f',uuid=4adb8b2b-205b-48bc-b01a-3ebb1060faf1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "address": "fa:16:3e:3c:32:cb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bc1dd29-89", "ovs_interfaceid": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.12/site-packages/nova/virt/libvirt/vif.py:839
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.057 2 DEBUG nova.network.os_vif_util [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converting VIF {"id": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "address": "fa:16:3e:3c:32:cb", "network": {"id": "c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-2104609851-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e4f7c384fd8a490f85ee6827269829c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bc1dd29-89", "ovs_interfaceid": "4bc1dd29-8989-40af-9718-f66ae5c0f09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:511
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.058 2 DEBUG nova.network.os_vif_util [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:32:cb,bridge_name='br-int',has_traffic_filtering=True,id=4bc1dd29-8989-40af-9718-f66ae5c0f09b,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bc1dd29-89') nova_to_osvif_vif /usr/lib/python3.12/site-packages/nova/network/os_vif_util.py:548
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.058 2 DEBUG os_vif [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:32:cb,bridge_name='br-int',has_traffic_filtering=True,id=4bc1dd29-8989-40af-9718-f66ae5c0f09b,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bc1dd29-89') unplug /usr/lib/python3.12/site-packages/os_vif/__init__.py:109
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.060 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bc1dd29-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:248
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbDestroyCommand(_result=None, table=QoS, record=b6e3a239-af7d-4abe-964b-b85ff603be35) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.071 2 INFO os_vif [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:32:cb,bridge_name='br-int',has_traffic_filtering=True,id=4bc1dd29-8989-40af-9718-f66ae5c0f09b,network=Network(c6d7d7c0-fbc5-4242-a987-da6fff2b6bc3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bc1dd29-89')
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.071 2 INFO nova.virt.libvirt.driver [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Deleting instance files /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1_del
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.072 2 INFO nova.virt.libvirt.driver [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Deletion of /var/lib/nova/instances/4adb8b2b-205b-48bc-b01a-3ebb1060faf1_del complete
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.593 2 INFO nova.compute.manager [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Took 1.35 seconds to destroy the instance on the hypervisor.
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.594 2 DEBUG oslo.service.backend._eventlet.loopingcall [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.12/site-packages/oslo_service/backend/_eventlet/loopingcall.py:437
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.595 2 DEBUG nova.compute.manager [-] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Deallocating network for instance _deallocate_network /usr/lib/python3.12/site-packages/nova/compute/manager.py:2324
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.595 2 DEBUG nova.network.neutron [-] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.12/site-packages/nova/network/neutron.py:1863
Oct 08 16:46:34 compute-0 nova_compute[117413]: 2025-10-08 16:46:34.595 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:35 compute-0 sshd-session[154736]: Failed password for root from 193.46.255.159 port 33772 ssh2
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.393 2 WARNING neutronclient.v2_0.client [-] The python binding code in neutronclient is deprecated in favor of OpenstackSDK, please use that as this will be removed in a future release.
Oct 08 16:46:35 compute-0 podman[154830]: 2025-10-08 16:46:35.475369704 +0000 UTC m=+0.077557688 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:46:35 compute-0 podman[154831]: 2025-10-08 16:46:35.555293099 +0000 UTC m=+0.141842664 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4)
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.581 2 DEBUG nova.compute.manager [req-e18f9121-1361-484d-a3d5-cb0047aa534f req-81e36676-7c6a-4890-ab69-e325424a63ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Received event network-vif-unplugged-4bc1dd29-8989-40af-9718-f66ae5c0f09b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.582 2 DEBUG oslo_concurrency.lockutils [req-e18f9121-1361-484d-a3d5-cb0047aa534f req-81e36676-7c6a-4890-ab69-e325424a63ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Acquiring lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.582 2 DEBUG oslo_concurrency.lockutils [req-e18f9121-1361-484d-a3d5-cb0047aa534f req-81e36676-7c6a-4890-ab69-e325424a63ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.582 2 DEBUG oslo_concurrency.lockutils [req-e18f9121-1361-484d-a3d5-cb0047aa534f req-81e36676-7c6a-4890-ab69-e325424a63ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.583 2 DEBUG nova.compute.manager [req-e18f9121-1361-484d-a3d5-cb0047aa534f req-81e36676-7c6a-4890-ab69-e325424a63ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] No waiting events found dispatching network-vif-unplugged-4bc1dd29-8989-40af-9718-f66ae5c0f09b pop_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:344
Oct 08 16:46:35 compute-0 nova_compute[117413]: 2025-10-08 16:46:35.583 2 DEBUG nova.compute.manager [req-e18f9121-1361-484d-a3d5-cb0047aa534f req-81e36676-7c6a-4890-ab69-e325424a63ab c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Received event network-vif-unplugged-4bc1dd29-8989-40af-9718-f66ae5c0f09b for instance with task_state deleting. _process_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11590
Oct 08 16:46:36 compute-0 nova_compute[117413]: 2025-10-08 16:46:36.954 2 DEBUG nova.network.neutron [-] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.12/site-packages/nova/network/neutron.py:116
Oct 08 16:46:37 compute-0 nova_compute[117413]: 2025-10-08 16:46:37.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:37 compute-0 unix_chkpwd[154882]: password check failed for user (root)
Oct 08 16:46:37 compute-0 nova_compute[117413]: 2025-10-08 16:46:37.464 2 INFO nova.compute.manager [-] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Took 2.87 seconds to deallocate network for instance.
Oct 08 16:46:37 compute-0 nova_compute[117413]: 2025-10-08 16:46:37.657 2 DEBUG nova.compute.manager [req-925d8757-98ec-4ab4-91e7-cd0223f5a565 req-97133658-f0a6-4c1d-8745-1b673f8c0be1 c716cdec407f4995a49a9f3de687b862 5e690a68a5624a97b59c058430039321 - - default default] [instance: 4adb8b2b-205b-48bc-b01a-3ebb1060faf1] Received event network-vif-deleted-4bc1dd29-8989-40af-9718-f66ae5c0f09b external_instance_event /usr/lib/python3.12/site-packages/nova/compute/manager.py:11812
Oct 08 16:46:38 compute-0 nova_compute[117413]: 2025-10-08 16:46:38.125 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:38 compute-0 nova_compute[117413]: 2025-10-08 16:46:38.125 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:38 compute-0 nova_compute[117413]: 2025-10-08 16:46:38.182 2 DEBUG nova.compute.provider_tree [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:46:38 compute-0 nova_compute[117413]: 2025-10-08 16:46:38.694 2 DEBUG nova.scheduler.client.report [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:46:39 compute-0 nova_compute[117413]: 2025-10-08 16:46:39.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:39 compute-0 sshd-session[154736]: Failed password for root from 193.46.255.159 port 33772 ssh2
Oct 08 16:46:40 compute-0 nova_compute[117413]: 2025-10-08 16:46:40.621 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.496s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:40 compute-0 nova_compute[117413]: 2025-10-08 16:46:40.828 2 INFO nova.scheduler.client.report [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Deleted allocations for instance 4adb8b2b-205b-48bc-b01a-3ebb1060faf1
Oct 08 16:46:41 compute-0 unix_chkpwd[154883]: password check failed for user (root)
Oct 08 16:46:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:41.944 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:46:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:41.944 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:46:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:46:41.944 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:42 compute-0 nova_compute[117413]: 2025-10-08 16:46:42.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:42 compute-0 nova_compute[117413]: 2025-10-08 16:46:42.153 2 DEBUG oslo_concurrency.lockutils [None req-926b7670-3dc6-4a9e-afc5-0213c92f904e 7560d8247c7549c9a1a5774b411e593f cc10ca4f587446c896aeb3ac8d6a1fea - - default default] Lock "4adb8b2b-205b-48bc-b01a-3ebb1060faf1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.528s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:46:43 compute-0 nova_compute[117413]: 2025-10-08 16:46:43.359 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:46:43 compute-0 sshd-session[154736]: Failed password for root from 193.46.255.159 port 33772 ssh2
Oct 08 16:46:44 compute-0 nova_compute[117413]: 2025-10-08 16:46:44.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:44 compute-0 sshd-session[154736]: Received disconnect from 193.46.255.159 port 33772:11:  [preauth]
Oct 08 16:46:44 compute-0 sshd-session[154736]: Disconnected from authenticating user root 193.46.255.159 port 33772 [preauth]
Oct 08 16:46:44 compute-0 sshd-session[154736]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 08 16:46:45 compute-0 unix_chkpwd[154887]: password check failed for user (root)
Oct 08 16:46:45 compute-0 sshd-session[154885]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 08 16:46:47 compute-0 nova_compute[117413]: 2025-10-08 16:46:47.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:47 compute-0 podman[154888]: 2025-10-08 16:46:47.489967283 +0000 UTC m=+0.089891932 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 08 16:46:47 compute-0 nova_compute[117413]: 2025-10-08 16:46:47.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:47 compute-0 sshd-session[154885]: Failed password for root from 193.46.255.159 port 32704 ssh2
Oct 08 16:46:49 compute-0 nova_compute[117413]: 2025-10-08 16:46:49.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:49 compute-0 unix_chkpwd[154909]: password check failed for user (root)
Oct 08 16:46:51 compute-0 sshd-session[154885]: Failed password for root from 193.46.255.159 port 32704 ssh2
Oct 08 16:46:52 compute-0 nova_compute[117413]: 2025-10-08 16:46:52.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:53 compute-0 podman[154911]: 2025-10-08 16:46:53.466319242 +0000 UTC m=+0.069198748 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Oct 08 16:46:53 compute-0 unix_chkpwd[154933]: password check failed for user (root)
Oct 08 16:46:54 compute-0 nova_compute[117413]: 2025-10-08 16:46:54.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:55 compute-0 sshd-session[154885]: Failed password for root from 193.46.255.159 port 32704 ssh2
Oct 08 16:46:57 compute-0 nova_compute[117413]: 2025-10-08 16:46:57.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:57 compute-0 sshd-session[154885]: Received disconnect from 193.46.255.159 port 32704:11:  [preauth]
Oct 08 16:46:57 compute-0 sshd-session[154885]: Disconnected from authenticating user root 193.46.255.159 port 32704 [preauth]
Oct 08 16:46:57 compute-0 sshd-session[154885]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 08 16:46:58 compute-0 unix_chkpwd[154936]: password check failed for user (root)
Oct 08 16:46:58 compute-0 sshd-session[154934]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 08 16:46:59 compute-0 nova_compute[117413]: 2025-10-08 16:46:59.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:46:59 compute-0 podman[127881]: time="2025-10-08T16:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:46:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:46:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3033 "" "Go-http-client/1.1"
Oct 08 16:46:59 compute-0 sshd-session[154934]: Failed password for root from 193.46.255.159 port 42482 ssh2
Oct 08 16:47:00 compute-0 unix_chkpwd[154937]: password check failed for user (root)
Oct 08 16:47:00 compute-0 podman[154939]: 2025-10-08 16:47:00.486815104 +0000 UTC m=+0.081063878 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:47:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:00.497 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:e7:54 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-2920a1da-0103-4067-8ff6-501b12e6af20', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920a1da-0103-4067-8ff6-501b12e6af20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07787ff1e66943db9d2dd83bc996bd8c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7ca742b-aef6-4cda-b3c3-28b8fe3a7e99, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=78a2f0ad-de16-4bb3-bc0d-e7160b171276) old=Port_Binding(mac=['fa:16:3e:5a:e7:54'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-2920a1da-0103-4067-8ff6-501b12e6af20', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920a1da-0103-4067-8ff6-501b12e6af20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07787ff1e66943db9d2dd83bc996bd8c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:47:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:00.498 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 78a2f0ad-de16-4bb3-bc0d-e7160b171276 in datapath 2920a1da-0103-4067-8ff6-501b12e6af20 updated
Oct 08 16:47:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:00.499 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2920a1da-0103-4067-8ff6-501b12e6af20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:47:00 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:00.500 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[e5da8120-37d7-4f0b-a0ed-d1ce5726762f]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:47:00 compute-0 podman[154938]: 2025-10-08 16:47:00.521999884 +0000 UTC m=+0.109481135 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: ERROR   16:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: ERROR   16:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: ERROR   16:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: ERROR   16:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:47:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:47:02 compute-0 nova_compute[117413]: 2025-10-08 16:47:02.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:02 compute-0 sshd-session[154934]: Failed password for root from 193.46.255.159 port 42482 ssh2
Oct 08 16:47:04 compute-0 unix_chkpwd[154976]: password check failed for user (root)
Oct 08 16:47:04 compute-0 nova_compute[117413]: 2025-10-08 16:47:04.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:06 compute-0 sshd-session[154934]: Failed password for root from 193.46.255.159 port 42482 ssh2
Oct 08 16:47:06 compute-0 podman[154977]: 2025-10-08 16:47:06.476480956 +0000 UTC m=+0.077961489 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:47:06 compute-0 podman[154978]: 2025-10-08 16:47:06.511061579 +0000 UTC m=+0.106104508 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:47:07 compute-0 nova_compute[117413]: 2025-10-08 16:47:07.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:07 compute-0 sshd-session[154934]: Received disconnect from 193.46.255.159 port 42482:11:  [preauth]
Oct 08 16:47:07 compute-0 sshd-session[154934]: Disconnected from authenticating user root 193.46.255.159 port 42482 [preauth]
Oct 08 16:47:07 compute-0 sshd-session[154934]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 08 16:47:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:08.509 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f1:23:43 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7054e31b-fcdb-4a56-ad03-d6171ef4c11c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7054e31b-fcdb-4a56-ad03-d6171ef4c11c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f265ae2801741deb1da9976fd7d4909', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff2bc5ea-a5b8-4c9d-966f-9fe9a94dd029, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=29d6dbc1-a897-4f7a-93af-744d8b66b87f) old=Port_Binding(mac=['fa:16:3e:f1:23:43'], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-7054e31b-fcdb-4a56-ad03-d6171ef4c11c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7054e31b-fcdb-4a56-ad03-d6171ef4c11c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f265ae2801741deb1da9976fd7d4909', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:47:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:08.510 28633 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 29d6dbc1-a897-4f7a-93af-744d8b66b87f in datapath 7054e31b-fcdb-4a56-ad03-d6171ef4c11c updated
Oct 08 16:47:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:08.512 28633 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7054e31b-fcdb-4a56-ad03-d6171ef4c11c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:756
Oct 08 16:47:08 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:08.513 139805 DEBUG oslo.privsep.daemon [-] privsep: reply[dca228bb-f211-4328-969e-56c20c6c2e52]: (4, False) _call_back /usr/lib/python3.12/site-packages/oslo_privsep/daemon.py:515
Oct 08 16:47:09 compute-0 nova_compute[117413]: 2025-10-08 16:47:09.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:11.665 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:47:11 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:11.666 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:47:11 compute-0 nova_compute[117413]: 2025-10-08 16:47:11.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:12 compute-0 nova_compute[117413]: 2025-10-08 16:47:12.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:14 compute-0 nova_compute[117413]: 2025-10-08 16:47:14.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:17 compute-0 nova_compute[117413]: 2025-10-08 16:47:17.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:18 compute-0 podman[155026]: 2025-10-08 16:47:18.504432589 +0000 UTC m=+0.097676946 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007)
Oct 08 16:47:19 compute-0 nova_compute[117413]: 2025-10-08 16:47:19.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:19 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:19.667 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:47:20 compute-0 nova_compute[117413]: 2025-10-08 16:47:20.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:21 compute-0 nova_compute[117413]: 2025-10-08 16:47:21.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:21 compute-0 nova_compute[117413]: 2025-10-08 16:47:21.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:47:21 compute-0 nova_compute[117413]: 2025-10-08 16:47:21.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:47:21 compute-0 nova_compute[117413]: 2025-10-08 16:47:21.880 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:47:21 compute-0 nova_compute[117413]: 2025-10-08 16:47:21.880 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.130 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.131 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.168 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.037s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.169 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6133MB free_disk=73.24159240722656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.170 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:47:22 compute-0 nova_compute[117413]: 2025-10-08 16:47:22.170 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:47:23 compute-0 nova_compute[117413]: 2025-10-08 16:47:23.224 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:47:23 compute-0 nova_compute[117413]: 2025-10-08 16:47:23.224 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:47:22 up 55 min,  0 user,  load average: 0.27, 0.28, 0.22\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:47:23 compute-0 nova_compute[117413]: 2025-10-08 16:47:23.245 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:47:23 compute-0 nova_compute[117413]: 2025-10-08 16:47:23.752 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:47:24 compute-0 nova_compute[117413]: 2025-10-08 16:47:24.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:24 compute-0 ovn_controller[19768]: 2025-10-08T16:47:24Z|00298|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 08 16:47:24 compute-0 nova_compute[117413]: 2025-10-08 16:47:24.264 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:47:24 compute-0 nova_compute[117413]: 2025-10-08 16:47:24.265 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.095s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:47:24 compute-0 podman[155048]: 2025-10-08 16:47:24.485293369 +0000 UTC m=+0.078710551 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Oct 08 16:47:26 compute-0 nova_compute[117413]: 2025-10-08 16:47:26.260 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:26 compute-0 nova_compute[117413]: 2025-10-08 16:47:26.261 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:26 compute-0 nova_compute[117413]: 2025-10-08 16:47:26.261 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:27 compute-0 nova_compute[117413]: 2025-10-08 16:47:27.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:27 compute-0 nova_compute[117413]: 2025-10-08 16:47:27.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:27 compute-0 nova_compute[117413]: 2025-10-08 16:47:27.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:47:29 compute-0 nova_compute[117413]: 2025-10-08 16:47:29.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:29 compute-0 nova_compute[117413]: 2025-10-08 16:47:29.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:29 compute-0 podman[127881]: time="2025-10-08T16:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:47:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:47:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 08 16:47:31 compute-0 nova_compute[117413]: 2025-10-08 16:47:31.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: ERROR   16:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: ERROR   16:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: ERROR   16:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: ERROR   16:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:47:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:47:31 compute-0 podman[155070]: 2025-10-08 16:47:31.466307256 +0000 UTC m=+0.071507356 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.build-date=20251007, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:47:31 compute-0 podman[155069]: 2025-10-08 16:47:31.500950721 +0000 UTC m=+0.108378285 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=iscsid)
Oct 08 16:47:32 compute-0 nova_compute[117413]: 2025-10-08 16:47:32.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:34 compute-0 nova_compute[117413]: 2025-10-08 16:47:34.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:37 compute-0 nova_compute[117413]: 2025-10-08 16:47:37.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:37 compute-0 podman[155107]: 2025-10-08 16:47:37.496478833 +0000 UTC m=+0.093083764 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:47:37 compute-0 podman[155108]: 2025-10-08 16:47:37.557385002 +0000 UTC m=+0.148982879 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 08 16:47:39 compute-0 nova_compute[117413]: 2025-10-08 16:47:39.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:41.945 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:47:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:41.945 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:47:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:47:41.945 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:47:42 compute-0 nova_compute[117413]: 2025-10-08 16:47:42.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:44 compute-0 nova_compute[117413]: 2025-10-08 16:47:44.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:47 compute-0 nova_compute[117413]: 2025-10-08 16:47:47.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:49 compute-0 nova_compute[117413]: 2025-10-08 16:47:49.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:49 compute-0 podman[155158]: 2025-10-08 16:47:49.488960186 +0000 UTC m=+0.090533871 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:47:52 compute-0 nova_compute[117413]: 2025-10-08 16:47:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:54 compute-0 nova_compute[117413]: 2025-10-08 16:47:54.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:55 compute-0 podman[155182]: 2025-10-08 16:47:55.489248415 +0000 UTC m=+0.080469642 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:47:57 compute-0 nova_compute[117413]: 2025-10-08 16:47:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:59 compute-0 nova_compute[117413]: 2025-10-08 16:47:59.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:47:59 compute-0 podman[127881]: time="2025-10-08T16:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:47:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:47:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3029 "" "Go-http-client/1.1"
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: ERROR   16:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: ERROR   16:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: ERROR   16:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: ERROR   16:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:48:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:48:02 compute-0 nova_compute[117413]: 2025-10-08 16:48:02.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:02 compute-0 podman[155207]: 2025-10-08 16:48:02.48100002 +0000 UTC m=+0.079690839 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Oct 08 16:48:02 compute-0 podman[155206]: 2025-10-08 16:48:02.514911104 +0000 UTC m=+0.115765205 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=watcher_latest)
Oct 08 16:48:04 compute-0 nova_compute[117413]: 2025-10-08 16:48:04.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:07 compute-0 nova_compute[117413]: 2025-10-08 16:48:07.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:08 compute-0 podman[155245]: 2025-10-08 16:48:08.475642035 +0000 UTC m=+0.071426262 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:48:08 compute-0 podman[155246]: 2025-10-08 16:48:08.533411143 +0000 UTC m=+0.126809251 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 08 16:48:09 compute-0 nova_compute[117413]: 2025-10-08 16:48:09.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:10 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 08 16:48:12 compute-0 nova_compute[117413]: 2025-10-08 16:48:12.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:14 compute-0 nova_compute[117413]: 2025-10-08 16:48:14.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:17 compute-0 nova_compute[117413]: 2025-10-08 16:48:17.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:19 compute-0 nova_compute[117413]: 2025-10-08 16:48:19.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:20 compute-0 podman[155295]: 2025-10-08 16:48:20.475740187 +0000 UTC m=+0.081664786 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:48:21 compute-0 nova_compute[117413]: 2025-10-08 16:48:21.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:21 compute-0 nova_compute[117413]: 2025-10-08 16:48:21.876 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:48:21 compute-0 nova_compute[117413]: 2025-10-08 16:48:21.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:48:21 compute-0 nova_compute[117413]: 2025-10-08 16:48:21.877 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:48:21 compute-0 nova_compute[117413]: 2025-10-08 16:48:21.877 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.022 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.023 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.044 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.045 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6163MB free_disk=73.24161148071289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.045 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.045 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:48:22 compute-0 nova_compute[117413]: 2025-10-08 16:48:22.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:23 compute-0 nova_compute[117413]: 2025-10-08 16:48:23.096 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:48:23 compute-0 nova_compute[117413]: 2025-10-08 16:48:23.097 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:48:22 up 56 min,  0 user,  load average: 0.18, 0.24, 0.21\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:48:23 compute-0 nova_compute[117413]: 2025-10-08 16:48:23.117 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:48:23 compute-0 nova_compute[117413]: 2025-10-08 16:48:23.625 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:48:24 compute-0 nova_compute[117413]: 2025-10-08 16:48:24.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:24 compute-0 nova_compute[117413]: 2025-10-08 16:48:24.133 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:48:24 compute-0 nova_compute[117413]: 2025-10-08 16:48:24.134 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.089s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:48:25 compute-0 nova_compute[117413]: 2025-10-08 16:48:25.134 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:25 compute-0 nova_compute[117413]: 2025-10-08 16:48:25.135 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:26 compute-0 nova_compute[117413]: 2025-10-08 16:48:26.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:26 compute-0 nova_compute[117413]: 2025-10-08 16:48:26.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:26 compute-0 podman[155316]: 2025-10-08 16:48:26.452805548 +0000 UTC m=+0.061156198 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 08 16:48:27 compute-0 nova_compute[117413]: 2025-10-08 16:48:27.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:29 compute-0 nova_compute[117413]: 2025-10-08 16:48:29.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:29 compute-0 nova_compute[117413]: 2025-10-08 16:48:29.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:29 compute-0 nova_compute[117413]: 2025-10-08 16:48:29.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:48:29 compute-0 podman[127881]: time="2025-10-08T16:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:48:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:48:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 08 16:48:30 compute-0 nova_compute[117413]: 2025-10-08 16:48:30.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:31 compute-0 nova_compute[117413]: 2025-10-08 16:48:31.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: ERROR   16:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: ERROR   16:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: ERROR   16:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: ERROR   16:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:48:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:48:32 compute-0 nova_compute[117413]: 2025-10-08 16:48:32.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:33 compute-0 podman[155340]: 2025-10-08 16:48:33.455577391 +0000 UTC m=+0.053227479 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Oct 08 16:48:33 compute-0 podman[155339]: 2025-10-08 16:48:33.482932796 +0000 UTC m=+0.088201313 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Oct 08 16:48:34 compute-0 nova_compute[117413]: 2025-10-08 16:48:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:37 compute-0 nova_compute[117413]: 2025-10-08 16:48:37.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:39 compute-0 nova_compute[117413]: 2025-10-08 16:48:39.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:39 compute-0 podman[155382]: 2025-10-08 16:48:39.442748593 +0000 UTC m=+0.050171133 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:48:39 compute-0 podman[155383]: 2025-10-08 16:48:39.485788848 +0000 UTC m=+0.093351362 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, container_name=ovn_controller)
Oct 08 16:48:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:48:41.946 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:48:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:48:41.947 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:48:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:48:41.947 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:48:42 compute-0 nova_compute[117413]: 2025-10-08 16:48:42.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:48:42.564 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:48:42 compute-0 nova_compute[117413]: 2025-10-08 16:48:42.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:42 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:48:42.566 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:48:43 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:48:43.567 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:48:44 compute-0 nova_compute[117413]: 2025-10-08 16:48:44.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:44 compute-0 nova_compute[117413]: 2025-10-08 16:48:44.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:48:47 compute-0 nova_compute[117413]: 2025-10-08 16:48:47.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:49 compute-0 nova_compute[117413]: 2025-10-08 16:48:49.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:51 compute-0 podman[155431]: 2025-10-08 16:48:51.458669548 +0000 UTC m=+0.065293476 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Oct 08 16:48:52 compute-0 nova_compute[117413]: 2025-10-08 16:48:52.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:54 compute-0 nova_compute[117413]: 2025-10-08 16:48:54.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:57 compute-0 nova_compute[117413]: 2025-10-08 16:48:57.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:57 compute-0 podman[155452]: 2025-10-08 16:48:57.279206635 +0000 UTC m=+0.103251266 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 08 16:48:59 compute-0 nova_compute[117413]: 2025-10-08 16:48:59.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:48:59 compute-0 podman[127881]: time="2025-10-08T16:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:48:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:48:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3030 "" "Go-http-client/1.1"
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: ERROR   16:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: ERROR   16:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: ERROR   16:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: ERROR   16:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:49:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:49:02 compute-0 nova_compute[117413]: 2025-10-08 16:49:02.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:04 compute-0 nova_compute[117413]: 2025-10-08 16:49:04.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:04 compute-0 podman[155474]: 2025-10-08 16:49:04.464032626 +0000 UTC m=+0.066388458 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:49:04 compute-0 podman[155473]: 2025-10-08 16:49:04.485301046 +0000 UTC m=+0.089449139 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 08 16:49:07 compute-0 nova_compute[117413]: 2025-10-08 16:49:07.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:09 compute-0 nova_compute[117413]: 2025-10-08 16:49:09.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:10 compute-0 podman[155513]: 2025-10-08 16:49:10.481236919 +0000 UTC m=+0.079793642 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:49:10 compute-0 podman[155514]: 2025-10-08 16:49:10.501147291 +0000 UTC m=+0.101736593 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007)
Oct 08 16:49:12 compute-0 nova_compute[117413]: 2025-10-08 16:49:12.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:14 compute-0 nova_compute[117413]: 2025-10-08 16:49:14.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:17 compute-0 nova_compute[117413]: 2025-10-08 16:49:17.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:19 compute-0 nova_compute[117413]: 2025-10-08 16:49:19.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:22 compute-0 podman[155565]: 2025-10-08 16:49:22.451225906 +0000 UTC m=+0.057464371 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.878 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.879 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:49:22 compute-0 nova_compute[117413]: 2025-10-08 16:49:22.880 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:49:23 compute-0 nova_compute[117413]: 2025-10-08 16:49:23.054 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:49:23 compute-0 nova_compute[117413]: 2025-10-08 16:49:23.055 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:49:23 compute-0 nova_compute[117413]: 2025-10-08 16:49:23.082 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.027s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:49:23 compute-0 nova_compute[117413]: 2025-10-08 16:49:23.083 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6175MB free_disk=73.24161148071289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:49:23 compute-0 nova_compute[117413]: 2025-10-08 16:49:23.083 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:49:23 compute-0 nova_compute[117413]: 2025-10-08 16:49:23.084 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:49:24 compute-0 nova_compute[117413]: 2025-10-08 16:49:24.127 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:49:24 compute-0 nova_compute[117413]: 2025-10-08 16:49:24.127 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:49:23 up 57 min,  0 user,  load average: 0.06, 0.20, 0.19\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:49:24 compute-0 nova_compute[117413]: 2025-10-08 16:49:24.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:24 compute-0 nova_compute[117413]: 2025-10-08 16:49:24.148 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:49:24 compute-0 nova_compute[117413]: 2025-10-08 16:49:24.656 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:49:25 compute-0 nova_compute[117413]: 2025-10-08 16:49:25.168 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:49:25 compute-0 nova_compute[117413]: 2025-10-08 16:49:25.169 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.085s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:49:27 compute-0 nova_compute[117413]: 2025-10-08 16:49:27.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:27 compute-0 nova_compute[117413]: 2025-10-08 16:49:27.164 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:27 compute-0 nova_compute[117413]: 2025-10-08 16:49:27.165 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:27 compute-0 nova_compute[117413]: 2025-10-08 16:49:27.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:27 compute-0 podman[155586]: 2025-10-08 16:49:27.472246896 +0000 UTC m=+0.072931316 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Oct 08 16:49:29 compute-0 nova_compute[117413]: 2025-10-08 16:49:29.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:29 compute-0 podman[127881]: time="2025-10-08T16:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:49:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:49:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:49:30 compute-0 nova_compute[117413]: 2025-10-08 16:49:30.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:30 compute-0 nova_compute[117413]: 2025-10-08 16:49:30.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:49:31 compute-0 nova_compute[117413]: 2025-10-08 16:49:31.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:31 compute-0 nova_compute[117413]: 2025-10-08 16:49:31.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: ERROR   16:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: ERROR   16:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: ERROR   16:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: ERROR   16:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:49:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:49:32 compute-0 nova_compute[117413]: 2025-10-08 16:49:32.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:34 compute-0 nova_compute[117413]: 2025-10-08 16:49:34.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:35 compute-0 podman[155609]: 2025-10-08 16:49:35.454848723 +0000 UTC m=+0.056187665 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:49:35 compute-0 podman[155608]: 2025-10-08 16:49:35.469859434 +0000 UTC m=+0.072352379 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251007, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 08 16:49:37 compute-0 nova_compute[117413]: 2025-10-08 16:49:37.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:39 compute-0 nova_compute[117413]: 2025-10-08 16:49:39.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:41 compute-0 sshd-session[155648]: Accepted publickey for zuul from 192.168.122.10 port 35392 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:49:41 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct 08 16:49:41 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 08 16:49:41 compute-0 systemd-logind[847]: New session 19 of user zuul.
Oct 08 16:49:41 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 08 16:49:41 compute-0 podman[155650]: 2025-10-08 16:49:41.365937761 +0000 UTC m=+0.077020403 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:49:41 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct 08 16:49:41 compute-0 systemd[155690]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:49:41 compute-0 podman[155652]: 2025-10-08 16:49:41.45506019 +0000 UTC m=+0.152734187 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:49:41 compute-0 systemd[155690]: Queued start job for default target Main User Target.
Oct 08 16:49:41 compute-0 systemd[155690]: Created slice User Application Slice.
Oct 08 16:49:41 compute-0 systemd[155690]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 16:49:41 compute-0 systemd[155690]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 16:49:41 compute-0 systemd[155690]: Reached target Paths.
Oct 08 16:49:41 compute-0 systemd[155690]: Reached target Timers.
Oct 08 16:49:41 compute-0 systemd[155690]: Starting D-Bus User Message Bus Socket...
Oct 08 16:49:41 compute-0 systemd[155690]: Starting Create User's Volatile Files and Directories...
Oct 08 16:49:41 compute-0 systemd[155690]: Listening on D-Bus User Message Bus Socket.
Oct 08 16:49:41 compute-0 systemd[155690]: Reached target Sockets.
Oct 08 16:49:41 compute-0 systemd[155690]: Finished Create User's Volatile Files and Directories.
Oct 08 16:49:41 compute-0 systemd[155690]: Reached target Basic System.
Oct 08 16:49:41 compute-0 systemd[155690]: Reached target Main User Target.
Oct 08 16:49:41 compute-0 systemd[155690]: Startup finished in 175ms.
Oct 08 16:49:41 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct 08 16:49:41 compute-0 systemd[1]: Started Session 19 of User zuul.
Oct 08 16:49:41 compute-0 sshd-session[155648]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:49:41 compute-0 sudo[155718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 08 16:49:41 compute-0 sudo[155718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:49:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:49:41.948 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:49:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:49:41.950 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:49:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:49:41.950 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:49:42 compute-0 nova_compute[117413]: 2025-10-08 16:49:42.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:44 compute-0 nova_compute[117413]: 2025-10-08 16:49:44.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:47 compute-0 nova_compute[117413]: 2025-10-08 16:49:47.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:47 compute-0 ovs-vsctl[155889]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 08 16:49:48 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 155742 (sos)
Oct 08 16:49:48 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 08 16:49:48 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 08 16:49:48 compute-0 nova_compute[117413]: 2025-10-08 16:49:48.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:48 compute-0 nova_compute[117413]: 2025-10-08 16:49:48.364 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:49:48 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 08 16:49:48 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 08 16:49:48 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 08 16:49:49 compute-0 nova_compute[117413]: 2025-10-08 16:49:49.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:49 compute-0 nova_compute[117413]: 2025-10-08 16:49:49.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:49 compute-0 kernel: block vda: the capability attribute has been deprecated.
Oct 08 16:49:49 compute-0 crontab[156318]: (root) LIST (root)
Oct 08 16:49:49 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:49:49 compute-0 rsyslogd[1296]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 08 16:49:52 compute-0 systemd[1]: Starting Hostname Service...
Oct 08 16:49:52 compute-0 nova_compute[117413]: 2025-10-08 16:49:52.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:52 compute-0 systemd[1]: Started Hostname Service.
Oct 08 16:49:52 compute-0 nova_compute[117413]: 2025-10-08 16:49:52.902 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:49:52 compute-0 nova_compute[117413]: 2025-10-08 16:49:52.902 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:49:53 compute-0 nova_compute[117413]: 2025-10-08 16:49:53.410 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:49:53 compute-0 podman[156490]: 2025-10-08 16:49:53.434848598 +0000 UTC m=+0.062777053 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.4)
Oct 08 16:49:54 compute-0 nova_compute[117413]: 2025-10-08 16:49:54.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:56 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 08 16:49:56 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 08 16:49:56 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 08 16:49:56 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 08 16:49:56 compute-0 kernel: cfg80211: failed to load regulatory.db
Oct 08 16:49:57 compute-0 nova_compute[117413]: 2025-10-08 16:49:57.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:58 compute-0 ovs-appctl[157445]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 08 16:49:58 compute-0 ovs-appctl[157464]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 08 16:49:58 compute-0 podman[157417]: 2025-10-08 16:49:58.481313639 +0000 UTC m=+0.077705862 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:49:58 compute-0 ovs-appctl[157481]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 08 16:49:59 compute-0 nova_compute[117413]: 2025-10-08 16:49:59.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:49:59 compute-0 podman[127881]: time="2025-10-08T16:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:49:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:49:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: ERROR   16:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: ERROR   16:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: ERROR   16:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: ERROR   16:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:50:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:50:02 compute-0 nova_compute[117413]: 2025-10-08 16:50:02.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:02 compute-0 systemd[1]: Starting system activity accounting tool...
Oct 08 16:50:02 compute-0 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 08 16:50:02 compute-0 systemd[1]: Finished system activity accounting tool.
Oct 08 16:50:04 compute-0 nova_compute[117413]: 2025-10-08 16:50:04.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:05 compute-0 podman[158479]: 2025-10-08 16:50:05.887550067 +0000 UTC m=+0.076446706 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:50:05 compute-0 podman[158476]: 2025-10-08 16:50:05.924464317 +0000 UTC m=+0.104067079 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:50:07 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 08 16:50:07 compute-0 nova_compute[117413]: 2025-10-08 16:50:07.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:09 compute-0 systemd[1]: Starting Time & Date Service...
Oct 08 16:50:09 compute-0 systemd[1]: Started Time & Date Service.
Oct 08 16:50:09 compute-0 nova_compute[117413]: 2025-10-08 16:50:09.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:12 compute-0 nova_compute[117413]: 2025-10-08 16:50:12.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:12 compute-0 podman[158950]: 2025-10-08 16:50:12.338119294 +0000 UTC m=+0.083496759 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:50:12 compute-0 podman[158951]: 2025-10-08 16:50:12.377144314 +0000 UTC m=+0.123029034 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:50:14 compute-0 nova_compute[117413]: 2025-10-08 16:50:14.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:17 compute-0 nova_compute[117413]: 2025-10-08 16:50:17.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:19 compute-0 nova_compute[117413]: 2025-10-08 16:50:19.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:22 compute-0 nova_compute[117413]: 2025-10-08 16:50:22.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:24 compute-0 nova_compute[117413]: 2025-10-08 16:50:24.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:24 compute-0 podman[159003]: 2025-10-08 16:50:24.460923178 +0000 UTC m=+0.068747894 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:50:24 compute-0 nova_compute[117413]: 2025-10-08 16:50:24.866 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:24 compute-0 nova_compute[117413]: 2025-10-08 16:50:24.867 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:24 compute-0 nova_compute[117413]: 2025-10-08 16:50:24.867 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.402 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.403 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.403 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.403 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.563 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.565 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.592 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.027s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.593 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5843MB free_disk=72.99792098999023GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.594 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:50:25 compute-0 nova_compute[117413]: 2025-10-08 16:50:25.594 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:50:26 compute-0 nova_compute[117413]: 2025-10-08 16:50:26.839 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:50:26 compute-0 nova_compute[117413]: 2025-10-08 16:50:26.840 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:50:25 up 58 min,  0 user,  load average: 1.36, 0.53, 0.31\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:50:26 compute-0 sudo[155718]: pam_unix(sudo:session): session closed for user root
Oct 08 16:50:26 compute-0 nova_compute[117413]: 2025-10-08 16:50:26.946 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:50:26 compute-0 sshd-session[155717]: Received disconnect from 192.168.122.10 port 35392:11: disconnected by user
Oct 08 16:50:26 compute-0 sshd-session[155717]: Disconnected from user zuul 192.168.122.10 port 35392
Oct 08 16:50:26 compute-0 sshd-session[155648]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:50:26 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Oct 08 16:50:26 compute-0 systemd[1]: session-19.scope: Consumed 1min 16.699s CPU time, 571.6M memory peak, read 173.0M from disk, written 16.5M to disk.
Oct 08 16:50:26 compute-0 systemd-logind[847]: Session 19 logged out. Waiting for processes to exit.
Oct 08 16:50:26 compute-0 systemd-logind[847]: Removed session 19.
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.044 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.045 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.072 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:50:27 compute-0 sshd-session[159025]: Accepted publickey for zuul from 192.168.122.10 port 56110 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:50:27 compute-0 systemd-logind[847]: New session 21 of user zuul.
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.098 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:50:27 compute-0 systemd[1]: Started Session 21 of User zuul.
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.122 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:50:27 compute-0 sshd-session[159025]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:27 compute-0 sudo[159029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-10-08-rybldbi.tar.xz
Oct 08 16:50:27 compute-0 sudo[159029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:50:27 compute-0 sudo[159029]: pam_unix(sudo:session): session closed for user root
Oct 08 16:50:27 compute-0 sshd-session[159028]: Received disconnect from 192.168.122.10 port 56110:11: disconnected by user
Oct 08 16:50:27 compute-0 sshd-session[159028]: Disconnected from user zuul 192.168.122.10 port 56110
Oct 08 16:50:27 compute-0 sshd-session[159025]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:50:27 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Oct 08 16:50:27 compute-0 systemd-logind[847]: Session 21 logged out. Waiting for processes to exit.
Oct 08 16:50:27 compute-0 systemd-logind[847]: Removed session 21.
Oct 08 16:50:27 compute-0 sshd-session[159054]: Accepted publickey for zuul from 192.168.122.10 port 56120 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:50:27 compute-0 systemd-logind[847]: New session 22 of user zuul.
Oct 08 16:50:27 compute-0 systemd[1]: Started Session 22 of User zuul.
Oct 08 16:50:27 compute-0 sshd-session[159054]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:50:27 compute-0 sudo[159058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 08 16:50:27 compute-0 sudo[159058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:50:27 compute-0 sudo[159058]: pam_unix(sudo:session): session closed for user root
Oct 08 16:50:27 compute-0 sshd-session[159057]: Received disconnect from 192.168.122.10 port 56120:11: disconnected by user
Oct 08 16:50:27 compute-0 sshd-session[159057]: Disconnected from user zuul 192.168.122.10 port 56120
Oct 08 16:50:27 compute-0 sshd-session[159054]: pam_unix(sshd:session): session closed for user zuul
Oct 08 16:50:27 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Oct 08 16:50:27 compute-0 systemd-logind[847]: Session 22 logged out. Waiting for processes to exit.
Oct 08 16:50:27 compute-0 systemd-logind[847]: Removed session 22.
Oct 08 16:50:27 compute-0 nova_compute[117413]: 2025-10-08 16:50:27.700 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:50:28 compute-0 nova_compute[117413]: 2025-10-08 16:50:28.213 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:50:28 compute-0 nova_compute[117413]: 2025-10-08 16:50:28.214 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.620s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:50:29 compute-0 nova_compute[117413]: 2025-10-08 16:50:29.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:29 compute-0 podman[159084]: 2025-10-08 16:50:29.484000468 +0000 UTC m=+0.084803346 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 08 16:50:29 compute-0 podman[127881]: time="2025-10-08T16:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:50:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:50:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: ERROR   16:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: ERROR   16:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: ERROR   16:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: ERROR   16:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:50:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:50:31 compute-0 nova_compute[117413]: 2025-10-08 16:50:31.708 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:31 compute-0 nova_compute[117413]: 2025-10-08 16:50:31.709 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:31 compute-0 nova_compute[117413]: 2025-10-08 16:50:31.709 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:31 compute-0 nova_compute[117413]: 2025-10-08 16:50:31.709 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:50:32 compute-0 nova_compute[117413]: 2025-10-08 16:50:32.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:32 compute-0 nova_compute[117413]: 2025-10-08 16:50:32.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:33 compute-0 nova_compute[117413]: 2025-10-08 16:50:33.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:34 compute-0 nova_compute[117413]: 2025-10-08 16:50:34.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:36 compute-0 podman[159106]: 2025-10-08 16:50:36.450122237 +0000 UTC m=+0.055133994 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:50:36 compute-0 podman[159107]: 2025-10-08 16:50:36.498667281 +0000 UTC m=+0.095548945 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 08 16:50:37 compute-0 nova_compute[117413]: 2025-10-08 16:50:37.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:37 compute-0 systemd[1]: Stopping User Manager for UID 1000...
Oct 08 16:50:37 compute-0 systemd[155690]: Activating special unit Exit the Session...
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped target Main User Target.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped target Basic System.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped target Paths.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped target Sockets.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped target Timers.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 08 16:50:37 compute-0 systemd[155690]: Closed D-Bus User Message Bus Socket.
Oct 08 16:50:37 compute-0 systemd[155690]: Stopped Create User's Volatile Files and Directories.
Oct 08 16:50:37 compute-0 systemd[155690]: Removed slice User Application Slice.
Oct 08 16:50:37 compute-0 systemd[155690]: Reached target Shutdown.
Oct 08 16:50:37 compute-0 systemd[155690]: Finished Exit the Session.
Oct 08 16:50:37 compute-0 systemd[155690]: Reached target Exit the Session.
Oct 08 16:50:37 compute-0 systemd[1]: user@1000.service: Deactivated successfully.
Oct 08 16:50:37 compute-0 systemd[1]: Stopped User Manager for UID 1000.
Oct 08 16:50:37 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 08 16:50:37 compute-0 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 08 16:50:37 compute-0 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 08 16:50:37 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 08 16:50:37 compute-0 systemd[1]: Removed slice User Slice of UID 1000.
Oct 08 16:50:37 compute-0 systemd[1]: user-1000.slice: Consumed 1min 17.347s CPU time, 577.0M memory peak, read 173.0M from disk, written 16.5M to disk.
Oct 08 16:50:39 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 08 16:50:39 compute-0 nova_compute[117413]: 2025-10-08 16:50:39.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:39 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 08 16:50:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:50:41.951 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:50:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:50:41.952 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:50:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:50:41.952 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:50:42 compute-0 nova_compute[117413]: 2025-10-08 16:50:42.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:42 compute-0 podman[159152]: 2025-10-08 16:50:42.463041578 +0000 UTC m=+0.063466573 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:50:42 compute-0 podman[159176]: 2025-10-08 16:50:42.617123493 +0000 UTC m=+0.128629285 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.4)
Oct 08 16:50:44 compute-0 nova_compute[117413]: 2025-10-08 16:50:44.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:45 compute-0 nova_compute[117413]: 2025-10-08 16:50:45.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:50:47 compute-0 nova_compute[117413]: 2025-10-08 16:50:47.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:49 compute-0 nova_compute[117413]: 2025-10-08 16:50:49.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:52 compute-0 nova_compute[117413]: 2025-10-08 16:50:52.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:54 compute-0 nova_compute[117413]: 2025-10-08 16:50:54.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:55 compute-0 podman[159202]: 2025-10-08 16:50:55.466009719 +0000 UTC m=+0.069680432 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:50:57 compute-0 nova_compute[117413]: 2025-10-08 16:50:57.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:59 compute-0 nova_compute[117413]: 2025-10-08 16:50:59.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:50:59 compute-0 podman[127881]: time="2025-10-08T16:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:50:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:50:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3034 "" "Go-http-client/1.1"
Oct 08 16:51:00 compute-0 podman[159222]: 2025-10-08 16:51:00.487845201 +0000 UTC m=+0.084451927 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 08 16:51:01 compute-0 openstack_network_exporter[130039]: ERROR   16:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:51:01 compute-0 openstack_network_exporter[130039]: ERROR   16:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:51:01 compute-0 openstack_network_exporter[130039]: ERROR   16:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:51:01 compute-0 openstack_network_exporter[130039]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:51:01 compute-0 openstack_network_exporter[130039]: ERROR   16:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:51:02 compute-0 nova_compute[117413]: 2025-10-08 16:51:02.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:04 compute-0 nova_compute[117413]: 2025-10-08 16:51:04.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:07 compute-0 nova_compute[117413]: 2025-10-08 16:51:07.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:07 compute-0 podman[159244]: 2025-10-08 16:51:07.483113889 +0000 UTC m=+0.074414818 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:51:07 compute-0 podman[159245]: 2025-10-08 16:51:07.483182421 +0000 UTC m=+0.073125051 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:51:09 compute-0 nova_compute[117413]: 2025-10-08 16:51:09.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:12 compute-0 nova_compute[117413]: 2025-10-08 16:51:12.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:13 compute-0 podman[159281]: 2025-10-08 16:51:13.454523388 +0000 UTC m=+0.060033985 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:51:13 compute-0 podman[159282]: 2025-10-08 16:51:13.495421483 +0000 UTC m=+0.094596958 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251007)
Oct 08 16:51:14 compute-0 nova_compute[117413]: 2025-10-08 16:51:14.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:17 compute-0 nova_compute[117413]: 2025-10-08 16:51:17.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:19 compute-0 nova_compute[117413]: 2025-10-08 16:51:19.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:22 compute-0 nova_compute[117413]: 2025-10-08 16:51:22.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:24 compute-0 nova_compute[117413]: 2025-10-08 16:51:24.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:25 compute-0 nova_compute[117413]: 2025-10-08 16:51:25.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:25 compute-0 nova_compute[117413]: 2025-10-08 16:51:25.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:25 compute-0 nova_compute[117413]: 2025-10-08 16:51:25.889 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:51:25 compute-0 nova_compute[117413]: 2025-10-08 16:51:25.889 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:51:25 compute-0 nova_compute[117413]: 2025-10-08 16:51:25.890 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:51:25 compute-0 nova_compute[117413]: 2025-10-08 16:51:25.890 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:51:26 compute-0 nova_compute[117413]: 2025-10-08 16:51:26.108 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:51:26 compute-0 nova_compute[117413]: 2025-10-08 16:51:26.111 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:51:26 compute-0 nova_compute[117413]: 2025-10-08 16:51:26.134 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:51:26 compute-0 nova_compute[117413]: 2025-10-08 16:51:26.135 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6130MB free_disk=73.24114990234375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:51:26 compute-0 nova_compute[117413]: 2025-10-08 16:51:26.135 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:51:26 compute-0 nova_compute[117413]: 2025-10-08 16:51:26.135 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:51:26 compute-0 podman[159329]: 2025-10-08 16:51:26.486980683 +0000 UTC m=+0.083901230 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest, config_id=multipathd)
Oct 08 16:51:27 compute-0 nova_compute[117413]: 2025-10-08 16:51:27.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:27 compute-0 nova_compute[117413]: 2025-10-08 16:51:27.236 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:51:27 compute-0 nova_compute[117413]: 2025-10-08 16:51:27.238 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:51:26 up 59 min,  0 user,  load average: 0.50, 0.43, 0.28\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:51:27 compute-0 nova_compute[117413]: 2025-10-08 16:51:27.272 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:51:27 compute-0 nova_compute[117413]: 2025-10-08 16:51:27.782 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:51:28 compute-0 nova_compute[117413]: 2025-10-08 16:51:28.292 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:51:28 compute-0 nova_compute[117413]: 2025-10-08 16:51:28.293 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.158s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:51:29 compute-0 nova_compute[117413]: 2025-10-08 16:51:29.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:29 compute-0 nova_compute[117413]: 2025-10-08 16:51:29.292 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:29 compute-0 nova_compute[117413]: 2025-10-08 16:51:29.293 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:29 compute-0 nova_compute[117413]: 2025-10-08 16:51:29.293 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:29 compute-0 podman[127881]: time="2025-10-08T16:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:51:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:51:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3038 "" "Go-http-client/1.1"
Oct 08 16:51:31 compute-0 nova_compute[117413]: 2025-10-08 16:51:31.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:31 compute-0 nova_compute[117413]: 2025-10-08 16:51:31.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: ERROR   16:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: ERROR   16:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: ERROR   16:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: ERROR   16:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:51:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:51:31 compute-0 podman[159349]: 2025-10-08 16:51:31.457681538 +0000 UTC m=+0.065533733 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Oct 08 16:51:32 compute-0 nova_compute[117413]: 2025-10-08 16:51:32.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:34 compute-0 nova_compute[117413]: 2025-10-08 16:51:34.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:34 compute-0 nova_compute[117413]: 2025-10-08 16:51:34.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:34 compute-0 nova_compute[117413]: 2025-10-08 16:51:34.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:51:37 compute-0 nova_compute[117413]: 2025-10-08 16:51:37.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:38 compute-0 podman[159371]: 2025-10-08 16:51:38.454035116 +0000 UTC m=+0.054508686 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 08 16:51:38 compute-0 podman[159370]: 2025-10-08 16:51:38.464785785 +0000 UTC m=+0.065325817 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, container_name=iscsid, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Oct 08 16:51:39 compute-0 nova_compute[117413]: 2025-10-08 16:51:39.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:51:41.953 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:51:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:51:41.953 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:51:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:51:41.953 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:51:42 compute-0 nova_compute[117413]: 2025-10-08 16:51:42.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:44 compute-0 nova_compute[117413]: 2025-10-08 16:51:44.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:44 compute-0 podman[159407]: 2025-10-08 16:51:44.446727176 +0000 UTC m=+0.055577687 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:51:44 compute-0 podman[159408]: 2025-10-08 16:51:44.494798386 +0000 UTC m=+0.103506143 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:51:47 compute-0 nova_compute[117413]: 2025-10-08 16:51:47.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:49 compute-0 nova_compute[117413]: 2025-10-08 16:51:49.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:52 compute-0 nova_compute[117413]: 2025-10-08 16:51:52.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:54 compute-0 nova_compute[117413]: 2025-10-08 16:51:54.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:57 compute-0 nova_compute[117413]: 2025-10-08 16:51:57.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:57 compute-0 podman[159457]: 2025-10-08 16:51:57.382419503 +0000 UTC m=+0.074572122 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007)
Oct 08 16:51:59 compute-0 nova_compute[117413]: 2025-10-08 16:51:59.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:51:59 compute-0 podman[127881]: time="2025-10-08T16:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:51:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:51:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3038 "" "Go-http-client/1.1"
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: ERROR   16:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: ERROR   16:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: ERROR   16:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: ERROR   16:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:52:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:52:02 compute-0 nova_compute[117413]: 2025-10-08 16:52:02.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:02 compute-0 podman[159477]: 2025-10-08 16:52:02.455987982 +0000 UTC m=+0.055849465 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:52:04 compute-0 nova_compute[117413]: 2025-10-08 16:52:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:07 compute-0 nova_compute[117413]: 2025-10-08 16:52:07.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:08 compute-0 sshd-session[159499]: error: kex_exchange_identification: read: Connection reset by peer
Oct 08 16:52:08 compute-0 sshd-session[159499]: Connection reset by 45.140.17.97 port 27206
Oct 08 16:52:09 compute-0 nova_compute[117413]: 2025-10-08 16:52:09.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:09 compute-0 podman[159501]: 2025-10-08 16:52:09.477671116 +0000 UTC m=+0.069429314 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:52:09 compute-0 podman[159500]: 2025-10-08 16:52:09.492409209 +0000 UTC m=+0.082791308 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS)
Oct 08 16:52:12 compute-0 nova_compute[117413]: 2025-10-08 16:52:12.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:14 compute-0 nova_compute[117413]: 2025-10-08 16:52:14.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:15 compute-0 podman[159539]: 2025-10-08 16:52:15.481420803 +0000 UTC m=+0.079993848 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:52:15 compute-0 podman[159540]: 2025-10-08 16:52:15.522567785 +0000 UTC m=+0.124040213 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Oct 08 16:52:17 compute-0 nova_compute[117413]: 2025-10-08 16:52:17.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:19 compute-0 nova_compute[117413]: 2025-10-08 16:52:19.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:22 compute-0 nova_compute[117413]: 2025-10-08 16:52:22.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:24 compute-0 nova_compute[117413]: 2025-10-08 16:52:24.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:26 compute-0 nova_compute[117413]: 2025-10-08 16:52:26.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:26 compute-0 nova_compute[117413]: 2025-10-08 16:52:26.882 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:52:26 compute-0 nova_compute[117413]: 2025-10-08 16:52:26.883 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:52:26 compute-0 nova_compute[117413]: 2025-10-08 16:52:26.884 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:52:26 compute-0 nova_compute[117413]: 2025-10-08 16:52:26.884 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.095 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.097 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.129 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.131 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6135MB free_disk=73.24127578735352GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.131 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.132 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:52:27 compute-0 nova_compute[117413]: 2025-10-08 16:52:27.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:28 compute-0 nova_compute[117413]: 2025-10-08 16:52:28.181 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:52:28 compute-0 nova_compute[117413]: 2025-10-08 16:52:28.181 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:52:27 up  1:00,  0 user,  load average: 0.25, 0.36, 0.27\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:52:28 compute-0 nova_compute[117413]: 2025-10-08 16:52:28.204 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:52:28 compute-0 podman[159592]: 2025-10-08 16:52:28.458302096 +0000 UTC m=+0.065266532 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=watcher_latest, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 08 16:52:28 compute-0 nova_compute[117413]: 2025-10-08 16:52:28.713 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:52:29 compute-0 nova_compute[117413]: 2025-10-08 16:52:29.222 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:52:29 compute-0 nova_compute[117413]: 2025-10-08 16:52:29.223 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.091s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:52:29 compute-0 nova_compute[117413]: 2025-10-08 16:52:29.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:29 compute-0 podman[127881]: time="2025-10-08T16:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:52:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:52:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3031 "" "Go-http-client/1.1"
Oct 08 16:52:30 compute-0 nova_compute[117413]: 2025-10-08 16:52:30.219 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:30 compute-0 nova_compute[117413]: 2025-10-08 16:52:30.220 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:30 compute-0 nova_compute[117413]: 2025-10-08 16:52:30.221 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:30 compute-0 nova_compute[117413]: 2025-10-08 16:52:30.370 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:31 compute-0 nova_compute[117413]: 2025-10-08 16:52:31.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:31 compute-0 nova_compute[117413]: 2025-10-08 16:52:31.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: ERROR   16:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: ERROR   16:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: ERROR   16:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: ERROR   16:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:52:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:52:32 compute-0 nova_compute[117413]: 2025-10-08 16:52:32.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:33 compute-0 podman[159612]: 2025-10-08 16:52:33.443241125 +0000 UTC m=+0.053845135 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 08 16:52:34 compute-0 nova_compute[117413]: 2025-10-08 16:52:34.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:34 compute-0 nova_compute[117413]: 2025-10-08 16:52:34.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:35 compute-0 nova_compute[117413]: 2025-10-08 16:52:35.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:37 compute-0 nova_compute[117413]: 2025-10-08 16:52:37.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:39 compute-0 nova_compute[117413]: 2025-10-08 16:52:39.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:40 compute-0 podman[159634]: 2025-10-08 16:52:40.480637399 +0000 UTC m=+0.071676685 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Oct 08 16:52:40 compute-0 podman[159635]: 2025-10-08 16:52:40.493366414 +0000 UTC m=+0.087742935 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 
Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:52:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:52:41.954 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:52:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:52:41.955 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:52:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:52:41.955 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:52:42 compute-0 nova_compute[117413]: 2025-10-08 16:52:42.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:44 compute-0 nova_compute[117413]: 2025-10-08 16:52:44.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:46 compute-0 podman[159674]: 2025-10-08 16:52:46.474311605 +0000 UTC m=+0.071631284 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:52:46 compute-0 podman[159675]: 2025-10-08 16:52:46.545833865 +0000 UTC m=+0.131982034 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4)
Oct 08 16:52:47 compute-0 nova_compute[117413]: 2025-10-08 16:52:47.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:47 compute-0 nova_compute[117413]: 2025-10-08 16:52:47.357 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:52:49 compute-0 nova_compute[117413]: 2025-10-08 16:52:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:52 compute-0 nova_compute[117413]: 2025-10-08 16:52:52.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:54 compute-0 nova_compute[117413]: 2025-10-08 16:52:54.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:57 compute-0 nova_compute[117413]: 2025-10-08 16:52:57.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:59 compute-0 nova_compute[117413]: 2025-10-08 16:52:59.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:52:59 compute-0 podman[159722]: 2025-10-08 16:52:59.486829552 +0000 UTC m=+0.084919396 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 16:52:59 compute-0 podman[127881]: time="2025-10-08T16:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:52:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:52:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: ERROR   16:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: ERROR   16:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: ERROR   16:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: ERROR   16:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:53:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:53:02 compute-0 nova_compute[117413]: 2025-10-08 16:53:02.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:04 compute-0 nova_compute[117413]: 2025-10-08 16:53:04.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:04 compute-0 podman[159742]: 2025-10-08 16:53:04.467495667 +0000 UTC m=+0.075602058 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, 
description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Oct 08 16:53:07 compute-0 nova_compute[117413]: 2025-10-08 16:53:07.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:09 compute-0 nova_compute[117413]: 2025-10-08 16:53:09.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:11 compute-0 podman[159764]: 2025-10-08 16:53:11.458500851 +0000 UTC m=+0.060012621 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:53:11 compute-0 podman[159763]: 2025-10-08 16:53:11.477940519 +0000 UTC m=+0.085448311 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, managed_by=edpm_ansible)
Oct 08 16:53:12 compute-0 nova_compute[117413]: 2025-10-08 16:53:12.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:14 compute-0 nova_compute[117413]: 2025-10-08 16:53:14.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:17 compute-0 nova_compute[117413]: 2025-10-08 16:53:17.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:17 compute-0 podman[159803]: 2025-10-08 16:53:17.480164148 +0000 UTC m=+0.073175729 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:53:17 compute-0 podman[159804]: 2025-10-08 16:53:17.514631496 +0000 UTC m=+0.111645692 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Oct 08 16:53:19 compute-0 nova_compute[117413]: 2025-10-08 16:53:19.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:22 compute-0 nova_compute[117413]: 2025-10-08 16:53:22.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:24 compute-0 nova_compute[117413]: 2025-10-08 16:53:24.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:27 compute-0 nova_compute[117413]: 2025-10-08 16:53:27.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.893 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.893 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.894 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:53:28 compute-0 nova_compute[117413]: 2025-10-08 16:53:28.894 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.111 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.113 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.150 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.037s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.152 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6139MB free_disk=73.24147415161133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.152 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.153 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:53:29 compute-0 nova_compute[117413]: 2025-10-08 16:53:29.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:29 compute-0 podman[127881]: time="2025-10-08T16:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:53:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:53:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 08 16:53:30 compute-0 nova_compute[117413]: 2025-10-08 16:53:30.256 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:53:30 compute-0 nova_compute[117413]: 2025-10-08 16:53:30.256 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:53:29 up  1:01,  0 user,  load average: 0.16, 0.31, 0.25\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:53:30 compute-0 nova_compute[117413]: 2025-10-08 16:53:30.288 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:53:30 compute-0 podman[159854]: 2025-10-08 16:53:30.469017097 +0000 UTC m=+0.063537432 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, org.label-schema.build-date=20251007)
Oct 08 16:53:30 compute-0 nova_compute[117413]: 2025-10-08 16:53:30.798 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:53:31 compute-0 nova_compute[117413]: 2025-10-08 16:53:31.307 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:53:31 compute-0 nova_compute[117413]: 2025-10-08 16:53:31.308 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.155s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: ERROR   16:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: ERROR   16:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: ERROR   16:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: ERROR   16:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:53:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:53:32 compute-0 nova_compute[117413]: 2025-10-08 16:53:32.304 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:32 compute-0 nova_compute[117413]: 2025-10-08 16:53:32.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:32 compute-0 nova_compute[117413]: 2025-10-08 16:53:32.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:32 compute-0 nova_compute[117413]: 2025-10-08 16:53:32.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:32 compute-0 nova_compute[117413]: 2025-10-08 16:53:32.363 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:53:34 compute-0 nova_compute[117413]: 2025-10-08 16:53:34.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:35 compute-0 nova_compute[117413]: 2025-10-08 16:53:35.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:35 compute-0 podman[159875]: 2025-10-08 16:53:35.477928193 +0000 UTC m=+0.066253081 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Oct 08 16:53:36 compute-0 nova_compute[117413]: 2025-10-08 16:53:36.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:53:37 compute-0 nova_compute[117413]: 2025-10-08 16:53:37.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:39 compute-0 nova_compute[117413]: 2025-10-08 16:53:39.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:53:41.956 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:53:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:53:41.957 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:53:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:53:41.957 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:53:42 compute-0 nova_compute[117413]: 2025-10-08 16:53:42.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:42 compute-0 podman[159899]: 2025-10-08 16:53:42.486367616 +0000 UTC m=+0.080434486 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, tcib_managed=true, io.buildah.version=1.41.4)
Oct 08 16:53:42 compute-0 podman[159900]: 2025-10-08 16:53:42.496329042 +0000 UTC m=+0.084804542 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:53:44 compute-0 nova_compute[117413]: 2025-10-08 16:53:44.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:47 compute-0 nova_compute[117413]: 2025-10-08 16:53:47.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:48 compute-0 podman[159939]: 2025-10-08 16:53:48.492697165 +0000 UTC m=+0.088383834 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:53:48 compute-0 podman[159940]: 2025-10-08 16:53:48.545443917 +0000 UTC m=+0.137804280 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Oct 08 16:53:49 compute-0 nova_compute[117413]: 2025-10-08 16:53:49.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:52 compute-0 nova_compute[117413]: 2025-10-08 16:53:52.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:54 compute-0 nova_compute[117413]: 2025-10-08 16:53:54.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:57 compute-0 nova_compute[117413]: 2025-10-08 16:53:57.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:59 compute-0 nova_compute[117413]: 2025-10-08 16:53:59.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:53:59 compute-0 podman[127881]: time="2025-10-08T16:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:53:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:53:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: ERROR   16:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: ERROR   16:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: ERROR   16:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: ERROR   16:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: ERROR   16:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:54:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:54:01 compute-0 podman[159984]: 2025-10-08 16:54:01.506954752 +0000 UTC m=+0.103772916 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=multipathd, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 16:54:02 compute-0 nova_compute[117413]: 2025-10-08 16:54:02.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:04 compute-0 nova_compute[117413]: 2025-10-08 16:54:04.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:06 compute-0 podman[160005]: 2025-10-08 16:54:06.469787486 +0000 UTC m=+0.080434017 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Oct 08 16:54:07 compute-0 nova_compute[117413]: 2025-10-08 16:54:07.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:09 compute-0 nova_compute[117413]: 2025-10-08 16:54:09.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:12 compute-0 nova_compute[117413]: 2025-10-08 16:54:12.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:13 compute-0 podman[160027]: 2025-10-08 16:54:13.492737087 +0000 UTC m=+0.086244684 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251007, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_id=iscsid)
Oct 08 16:54:13 compute-0 podman[160028]: 2025-10-08 16:54:13.492957863 +0000 UTC m=+0.080698724 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:54:14 compute-0 nova_compute[117413]: 2025-10-08 16:54:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:17 compute-0 nova_compute[117413]: 2025-10-08 16:54:17.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:19 compute-0 nova_compute[117413]: 2025-10-08 16:54:19.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:19 compute-0 podman[160064]: 2025-10-08 16:54:19.44942796 +0000 UTC m=+0.059879637 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:54:19 compute-0 podman[160065]: 2025-10-08 16:54:19.511569742 +0000 UTC m=+0.120206787 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 08 16:54:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:54:20.921 28633 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:59:d4', 'max_tunid': '16711680', 'northd_internal_version': '24.09.4-20.37.0-77.8', 'svc_monitor_mac': '2a:96:71:14:32:2b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/event.py:55
Oct 08 16:54:20 compute-0 nova_compute[117413]: 2025-10-08 16:54:20.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:20 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:54:20.922 28633 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.12/site-packages/neutron/agent/ovn/metadata/agent.py:367
Oct 08 16:54:21 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:54:21.924 28633 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f72d8dca-98f2-44ea-b875-cd9a8b583db6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.12/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 08 16:54:22 compute-0 nova_compute[117413]: 2025-10-08 16:54:22.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:24 compute-0 nova_compute[117413]: 2025-10-08 16:54:24.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:27 compute-0 nova_compute[117413]: 2025-10-08 16:54:27.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:28 compute-0 nova_compute[117413]: 2025-10-08 16:54:28.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:28 compute-0 nova_compute[117413]: 2025-10-08 16:54:28.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:28 compute-0 nova_compute[117413]: 2025-10-08 16:54:28.906 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:54:28 compute-0 nova_compute[117413]: 2025-10-08 16:54:28.908 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:54:28 compute-0 nova_compute[117413]: 2025-10-08 16:54:28.909 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:54:28 compute-0 nova_compute[117413]: 2025-10-08 16:54:28.909 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.122 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.123 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.164 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.165 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6131MB free_disk=73.24149322509766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.165 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.165 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:54:29 compute-0 nova_compute[117413]: 2025-10-08 16:54:29.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:29 compute-0 podman[127881]: time="2025-10-08T16:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:54:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:54:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 08 16:54:30 compute-0 nova_compute[117413]: 2025-10-08 16:54:30.265 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:54:30 compute-0 nova_compute[117413]: 2025-10-08 16:54:30.266 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:54:29 up  1:02,  0 user,  load average: 0.15, 0.28, 0.25\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:54:30 compute-0 nova_compute[117413]: 2025-10-08 16:54:30.288 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:54:30 compute-0 nova_compute[117413]: 2025-10-08 16:54:30.808 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:54:31 compute-0 nova_compute[117413]: 2025-10-08 16:54:31.318 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:54:31 compute-0 nova_compute[117413]: 2025-10-08 16:54:31.319 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.153s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: ERROR   16:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: ERROR   16:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: ERROR   16:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: ERROR   16:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: ERROR   16:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:54:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:54:32 compute-0 nova_compute[117413]: 2025-10-08 16:54:32.314 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:32 compute-0 nova_compute[117413]: 2025-10-08 16:54:32.314 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:32 compute-0 nova_compute[117413]: 2025-10-08 16:54:32.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:32 compute-0 podman[160113]: 2025-10-08 16:54:32.483062802 +0000 UTC m=+0.079827769 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 08 16:54:33 compute-0 nova_compute[117413]: 2025-10-08 16:54:33.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:33 compute-0 nova_compute[117413]: 2025-10-08 16:54:33.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:54:34 compute-0 nova_compute[117413]: 2025-10-08 16:54:34.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:34 compute-0 nova_compute[117413]: 2025-10-08 16:54:34.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:36 compute-0 nova_compute[117413]: 2025-10-08 16:54:36.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:37 compute-0 nova_compute[117413]: 2025-10-08 16:54:37.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:37 compute-0 podman[160133]: 2025-10-08 16:54:37.467743634 +0000 UTC m=+0.072571461 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container)
Oct 08 16:54:38 compute-0 nova_compute[117413]: 2025-10-08 16:54:38.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:39 compute-0 nova_compute[117413]: 2025-10-08 16:54:39.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:54:41.959 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:54:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:54:41.960 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:54:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:54:41.960 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:54:42 compute-0 unix_chkpwd[160157]: password check failed for user (root)
Oct 08 16:54:42 compute-0 sshd-session[160154]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 08 16:54:42 compute-0 nova_compute[117413]: 2025-10-08 16:54:42.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:43 compute-0 sshd-session[160154]: Failed password for root from 193.46.255.103 port 55324 ssh2
Oct 08 16:54:44 compute-0 unix_chkpwd[160158]: password check failed for user (root)
Oct 08 16:54:44 compute-0 nova_compute[117413]: 2025-10-08 16:54:44.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:44 compute-0 podman[160160]: 2025-10-08 16:54:44.476273061 +0000 UTC m=+0.073111027 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.build-date=20251007, tcib_build_tag=watcher_latest)
Oct 08 16:54:44 compute-0 podman[160159]: 2025-10-08 16:54:44.482972983 +0000 UTC m=+0.083685000 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251007)
Oct 08 16:54:45 compute-0 sshd-session[160154]: Failed password for root from 193.46.255.103 port 55324 ssh2
Oct 08 16:54:46 compute-0 unix_chkpwd[160198]: password check failed for user (root)
Oct 08 16:54:47 compute-0 nova_compute[117413]: 2025-10-08 16:54:47.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:48 compute-0 sshd-session[160154]: Failed password for root from 193.46.255.103 port 55324 ssh2
Oct 08 16:54:49 compute-0 nova_compute[117413]: 2025-10-08 16:54:49.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:49 compute-0 sshd-session[160154]: Received disconnect from 193.46.255.103 port 55324:11:  [preauth]
Oct 08 16:54:49 compute-0 sshd-session[160154]: Disconnected from authenticating user root 193.46.255.103 port 55324 [preauth]
Oct 08 16:54:49 compute-0 sshd-session[160154]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 08 16:54:50 compute-0 nova_compute[117413]: 2025-10-08 16:54:50.358 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:50 compute-0 podman[160201]: 2025-10-08 16:54:50.473761415 +0000 UTC m=+0.071234943 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 08 16:54:50 compute-0 podman[160202]: 2025-10-08 16:54:50.51511682 +0000 UTC m=+0.114783871 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 08 16:54:50 compute-0 unix_chkpwd[160246]: password check failed for user (root)
Oct 08 16:54:50 compute-0 sshd-session[160199]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 08 16:54:52 compute-0 nova_compute[117413]: 2025-10-08 16:54:52.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:52 compute-0 sshd-session[160199]: Failed password for root from 193.46.255.103 port 52578 ssh2
Oct 08 16:54:53 compute-0 nova_compute[117413]: 2025-10-08 16:54:53.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:54 compute-0 nova_compute[117413]: 2025-10-08 16:54:54.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:54 compute-0 unix_chkpwd[160248]: password check failed for user (root)
Oct 08 16:54:54 compute-0 nova_compute[117413]: 2025-10-08 16:54:54.922 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:54:54 compute-0 nova_compute[117413]: 2025-10-08 16:54:54.923 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.12/site-packages/nova/compute/manager.py:11947
Oct 08 16:54:56 compute-0 sshd-session[160199]: Failed password for root from 193.46.255.103 port 52578 ssh2
Oct 08 16:54:56 compute-0 unix_chkpwd[160249]: password check failed for user (root)
Oct 08 16:54:57 compute-0 nova_compute[117413]: 2025-10-08 16:54:57.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:58 compute-0 sshd-session[160199]: Failed password for root from 193.46.255.103 port 52578 ssh2
Oct 08 16:54:58 compute-0 sshd-session[160199]: Received disconnect from 193.46.255.103 port 52578:11:  [preauth]
Oct 08 16:54:58 compute-0 sshd-session[160199]: Disconnected from authenticating user root 193.46.255.103 port 52578 [preauth]
Oct 08 16:54:58 compute-0 sshd-session[160199]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 08 16:54:59 compute-0 nova_compute[117413]: 2025-10-08 16:54:59.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:54:59 compute-0 unix_chkpwd[160252]: password check failed for user (root)
Oct 08 16:54:59 compute-0 sshd-session[160250]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 08 16:54:59 compute-0 podman[127881]: time="2025-10-08T16:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:54:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:54:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3035 "" "Go-http-client/1.1"
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: ERROR   16:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: ERROR   16:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: ERROR   16:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: ERROR   16:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: ERROR   16:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:55:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:55:01 compute-0 sshd-session[160250]: Failed password for root from 193.46.255.103 port 15878 ssh2
Oct 08 16:55:02 compute-0 nova_compute[117413]: 2025-10-08 16:55:02.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:02 compute-0 nova_compute[117413]: 2025-10-08 16:55:02.555 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:02 compute-0 nova_compute[117413]: 2025-10-08 16:55:02.555 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11909
Oct 08 16:55:03 compute-0 nova_compute[117413]: 2025-10-08 16:55:03.070 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11918
Oct 08 16:55:03 compute-0 unix_chkpwd[160253]: password check failed for user (root)
Oct 08 16:55:03 compute-0 podman[160254]: 2025-10-08 16:55:03.47922311 +0000 UTC m=+0.077683648 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251007, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest)
Oct 08 16:55:04 compute-0 nova_compute[117413]: 2025-10-08 16:55:04.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:04 compute-0 sshd-session[160250]: Failed password for root from 193.46.255.103 port 15878 ssh2
Oct 08 16:55:05 compute-0 unix_chkpwd[160274]: password check failed for user (root)
Oct 08 16:55:07 compute-0 nova_compute[117413]: 2025-10-08 16:55:07.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:07 compute-0 sshd-session[160250]: Failed password for root from 193.46.255.103 port 15878 ssh2
Oct 08 16:55:08 compute-0 podman[160275]: 2025-10-08 16:55:08.500121118 +0000 UTC m=+0.094726456 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9)
Oct 08 16:55:09 compute-0 sshd-session[160250]: Received disconnect from 193.46.255.103 port 15878:11:  [preauth]
Oct 08 16:55:09 compute-0 sshd-session[160250]: Disconnected from authenticating user root 193.46.255.103 port 15878 [preauth]
Oct 08 16:55:09 compute-0 sshd-session[160250]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.103  user=root
Oct 08 16:55:09 compute-0 nova_compute[117413]: 2025-10-08 16:55:09.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:12 compute-0 nova_compute[117413]: 2025-10-08 16:55:12.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:14 compute-0 nova_compute[117413]: 2025-10-08 16:55:14.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:15 compute-0 podman[160298]: 2025-10-08 16:55:15.461153663 +0000 UTC m=+0.061674189 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251007)
Oct 08 16:55:15 compute-0 podman[160297]: 2025-10-08 16:55:15.469271066 +0000 UTC m=+0.073554730 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=iscsid, org.label-schema.build-date=20251007, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 08 16:55:17 compute-0 nova_compute[117413]: 2025-10-08 16:55:17.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:19 compute-0 nova_compute[117413]: 2025-10-08 16:55:19.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:21 compute-0 podman[160336]: 2025-10-08 16:55:21.481692219 +0000 UTC m=+0.079394127 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 08 16:55:21 compute-0 podman[160337]: 2025-10-08 16:55:21.508504626 +0000 UTC m=+0.105274898 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 08 16:55:22 compute-0 nova_compute[117413]: 2025-10-08 16:55:22.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:24 compute-0 nova_compute[117413]: 2025-10-08 16:55:24.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:27 compute-0 nova_compute[117413]: 2025-10-08 16:55:27.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:28 compute-0 nova_compute[117413]: 2025-10-08 16:55:28.878 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:29 compute-0 nova_compute[117413]: 2025-10-08 16:55:29.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:29 compute-0 nova_compute[117413]: 2025-10-08 16:55:29.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:29 compute-0 podman[127881]: time="2025-10-08T16:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:55:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:55:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3038 "" "Go-http-client/1.1"
Oct 08 16:55:29 compute-0 nova_compute[117413]: 2025-10-08 16:55:29.987 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:55:29 compute-0 nova_compute[117413]: 2025-10-08 16:55:29.988 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:55:29 compute-0 nova_compute[117413]: 2025-10-08 16:55:29.988 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:55:29 compute-0 nova_compute[117413]: 2025-10-08 16:55:29.988 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:55:30 compute-0 nova_compute[117413]: 2025-10-08 16:55:30.209 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:55:30 compute-0 nova_compute[117413]: 2025-10-08 16:55:30.210 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:55:30 compute-0 nova_compute[117413]: 2025-10-08 16:55:30.234 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:55:30 compute-0 nova_compute[117413]: 2025-10-08 16:55:30.235 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6129MB free_disk=73.24147415161133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:55:30 compute-0 nova_compute[117413]: 2025-10-08 16:55:30.235 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:55:30 compute-0 nova_compute[117413]: 2025-10-08 16:55:30.236 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.413 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.413 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:55:30 up  1:03,  0 user,  load average: 0.05, 0.23, 0.23\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: ERROR   16:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: ERROR   16:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: ERROR   16:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: ERROR   16:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: ERROR   16:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:55:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.506 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing inventories for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:822
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.591 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating ProviderTree inventory for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:786
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.591 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Updating inventory in ProviderTree for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:176
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.604 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing aggregate associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, aggregates: None _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:831
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.629 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Refreshing trait associations for resource provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8, traits: HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIRTIO_PACKED,COMPUTE_SECURITY_STATELESS_FIRMWARE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI2,COMPUTE_SOUND_MODEL_ICH9,COMPUTE_SOUND_MODEL_SB16,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_ARCH_X86_64,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SOUND_MODEL_PCSPK,COMPUTE_SOUND_MODEL_ES1370,HW_ARCH_X86_64,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_VIRTIO_FS,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AESNI,COMPUTE_SECURITY_TPM_TIS,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_BMI,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SVM,COMPUTE_ADDRESS_SPACE_PASSTHROUGH,HW_CPU_X86_MMX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_IGB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOUND_MODEL_USB,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_SOUND_MODEL_AC97,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_ADDRESS_SPACE_EMULATED,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOUND_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ST
ORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_CRB,HW_CPU_X86_SSE42,COMPUTE_SOUND_MODEL_ICH6,COMPUTE_STORAGE_BUS_IDE _refresh_associations /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:843
Oct 08 16:55:31 compute-0 nova_compute[117413]: 2025-10-08 16:55:31.653 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:55:32 compute-0 nova_compute[117413]: 2025-10-08 16:55:32.177 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:55:32 compute-0 nova_compute[117413]: 2025-10-08 16:55:32.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:32 compute-0 nova_compute[117413]: 2025-10-08 16:55:32.717 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:55:32 compute-0 nova_compute[117413]: 2025-10-08 16:55:32.718 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.482s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:55:33 compute-0 nova_compute[117413]: 2025-10-08 16:55:33.718 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:33 compute-0 nova_compute[117413]: 2025-10-08 16:55:33.719 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:33 compute-0 nova_compute[117413]: 2025-10-08 16:55:33.719 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:34 compute-0 nova_compute[117413]: 2025-10-08 16:55:34.307 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:34 compute-0 nova_compute[117413]: 2025-10-08 16:55:34.308 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:55:34 compute-0 nova_compute[117413]: 2025-10-08 16:55:34.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:34 compute-0 podman[160390]: 2025-10-08 16:55:34.492033373 +0000 UTC m=+0.085540723 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 08 16:55:35 compute-0 nova_compute[117413]: 2025-10-08 16:55:35.951 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:37 compute-0 nova_compute[117413]: 2025-10-08 16:55:37.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:37 compute-0 nova_compute[117413]: 2025-10-08 16:55:37.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:38 compute-0 nova_compute[117413]: 2025-10-08 16:55:38.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:55:39 compute-0 nova_compute[117413]: 2025-10-08 16:55:39.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:39 compute-0 podman[160411]: 2025-10-08 16:55:39.489590342 +0000 UTC m=+0.091235396 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 08 16:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:55:41.961 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:55:41.962 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:55:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:55:41.962 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:55:42 compute-0 nova_compute[117413]: 2025-10-08 16:55:42.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:44 compute-0 nova_compute[117413]: 2025-10-08 16:55:44.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:46 compute-0 podman[160435]: 2025-10-08 16:55:46.490414468 +0000 UTC m=+0.081709894 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:55:46 compute-0 podman[160434]: 2025-10-08 16:55:46.502303839 +0000 UTC m=+0.098674130 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct 08 16:55:47 compute-0 nova_compute[117413]: 2025-10-08 16:55:47.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:49 compute-0 nova_compute[117413]: 2025-10-08 16:55:49.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:52 compute-0 podman[160469]: 2025-10-08 16:55:52.461256168 +0000 UTC m=+0.068254258 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:55:52 compute-0 nova_compute[117413]: 2025-10-08 16:55:52.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:52 compute-0 podman[160470]: 2025-10-08 16:55:52.520923988 +0000 UTC m=+0.113814143 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 08 16:55:54 compute-0 nova_compute[117413]: 2025-10-08 16:55:54.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:57 compute-0 nova_compute[117413]: 2025-10-08 16:55:57.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:59 compute-0 nova_compute[117413]: 2025-10-08 16:55:59.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:55:59 compute-0 podman[127881]: time="2025-10-08T16:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:55:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:55:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: ERROR   16:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: ERROR   16:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: ERROR   16:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: ERROR   16:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: ERROR   16:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:56:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:56:02 compute-0 nova_compute[117413]: 2025-10-08 16:56:02.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:04 compute-0 nova_compute[117413]: 2025-10-08 16:56:04.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:05 compute-0 podman[160520]: 2025-10-08 16:56:05.500962394 +0000 UTC m=+0.085533063 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=watcher_latest)
Oct 08 16:56:07 compute-0 nova_compute[117413]: 2025-10-08 16:56:07.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:09 compute-0 nova_compute[117413]: 2025-10-08 16:56:09.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:10 compute-0 podman[160541]: 2025-10-08 16:56:10.494072067 +0000 UTC m=+0.091866015 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Oct 08 16:56:12 compute-0 nova_compute[117413]: 2025-10-08 16:56:12.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:14 compute-0 nova_compute[117413]: 2025-10-08 16:56:14.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:17 compute-0 podman[160562]: 2025-10-08 16:56:17.470652318 +0000 UTC m=+0.079588812 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 08 16:56:17 compute-0 podman[160563]: 2025-10-08 16:56:17.472405228 +0000 UTC m=+0.078476880 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251007, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest)
Oct 08 16:56:17 compute-0 nova_compute[117413]: 2025-10-08 16:56:17.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:19 compute-0 nova_compute[117413]: 2025-10-08 16:56:19.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:22 compute-0 nova_compute[117413]: 2025-10-08 16:56:22.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:23 compute-0 podman[160600]: 2025-10-08 16:56:23.482856514 +0000 UTC m=+0.075020952 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 08 16:56:23 compute-0 podman[160601]: 2025-10-08 16:56:23.53852789 +0000 UTC m=+0.126089176 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2)
Oct 08 16:56:24 compute-0 nova_compute[117413]: 2025-10-08 16:56:24.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:27 compute-0 nova_compute[117413]: 2025-10-08 16:56:27.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:29 compute-0 nova_compute[117413]: 2025-10-08 16:56:29.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:29 compute-0 nova_compute[117413]: 2025-10-08 16:56:29.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:29 compute-0 podman[127881]: time="2025-10-08T16:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:56:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:56:29 compute-0 podman[127881]: @ - - [08/Oct/2025:16:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3033 "" "Go-http-client/1.1"
Oct 08 16:56:29 compute-0 nova_compute[117413]: 2025-10-08 16:56:29.902 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:56:29 compute-0 nova_compute[117413]: 2025-10-08 16:56:29.902 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:56:29 compute-0 nova_compute[117413]: 2025-10-08 16:56:29.903 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:56:29 compute-0 nova_compute[117413]: 2025-10-08 16:56:29.903 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:937
Oct 08 16:56:30 compute-0 nova_compute[117413]: 2025-10-08 16:56:30.155 2 WARNING nova.virt.libvirt.driver [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 08 16:56:30 compute-0 nova_compute[117413]: 2025-10-08 16:56:30.156 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:349
Oct 08 16:56:30 compute-0 nova_compute[117413]: 2025-10-08 16:56:30.192 2 DEBUG oslo_concurrency.processutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CMD "env LANG=C uptime" returned: 0 in 0.035s execute /usr/lib/python3.12/site-packages/oslo_concurrency/processutils.py:372
Oct 08 16:56:30 compute-0 nova_compute[117413]: 2025-10-08 16:56:30.192 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6136MB free_disk=73.24147415161133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1136
Oct 08 16:56:30 compute-0 nova_compute[117413]: 2025-10-08 16:56:30.193 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:56:30 compute-0 nova_compute[117413]: 2025-10-08 16:56:30.193 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:56:31 compute-0 nova_compute[117413]: 2025-10-08 16:56:31.257 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1159
Oct 08 16:56:31 compute-0 nova_compute[117413]: 2025-10-08 16:56:31.258 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] stats={'failed_builds': '0', 'uptime': ' 16:56:30 up  1:04,  0 user,  load average: 0.02, 0.18, 0.21\n'} _report_final_resource_view /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1168
Oct 08 16:56:31 compute-0 nova_compute[117413]: 2025-10-08 16:56:31.279 2 DEBUG nova.compute.provider_tree [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed in ProviderTree for provider: 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 update_inventory /usr/lib/python3.12/site-packages/nova/compute/provider_tree.py:180
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: ERROR   16:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: ERROR   16:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: ERROR   16:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: ERROR   16:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: ERROR   16:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:56:31 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:56:31 compute-0 nova_compute[117413]: 2025-10-08 16:56:31.788 2 DEBUG nova.scheduler.client.report [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Inventory has not changed for provider 9e0c638c-76e7-4854-b60d-5cdf0cf938b8 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.12/site-packages/nova/scheduler/client/report.py:958
Oct 08 16:56:32 compute-0 nova_compute[117413]: 2025-10-08 16:56:32.301 2 DEBUG nova.compute.resource_tracker [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.12/site-packages/nova/compute/resource_tracker.py:1097
Oct 08 16:56:32 compute-0 nova_compute[117413]: 2025-10-08 16:56:32.302 2 DEBUG oslo_concurrency.lockutils [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.109s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:56:32 compute-0 nova_compute[117413]: 2025-10-08 16:56:32.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:33 compute-0 nova_compute[117413]: 2025-10-08 16:56:33.303 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:33 compute-0 nova_compute[117413]: 2025-10-08 16:56:33.304 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:33 compute-0 nova_compute[117413]: 2025-10-08 16:56:33.359 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:34 compute-0 nova_compute[117413]: 2025-10-08 16:56:34.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:34 compute-0 nova_compute[117413]: 2025-10-08 16:56:34.362 2 DEBUG nova.compute.manager [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.12/site-packages/nova/compute/manager.py:11228
Oct 08 16:56:34 compute-0 nova_compute[117413]: 2025-10-08 16:56:34.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:36 compute-0 podman[160651]: 2025-10-08 16:56:36.480445644 +0000 UTC m=+0.076806153 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 08 16:56:37 compute-0 nova_compute[117413]: 2025-10-08 16:56:37.363 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:37 compute-0 nova_compute[117413]: 2025-10-08 16:56:37.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:39 compute-0 nova_compute[117413]: 2025-10-08 16:56:39.362 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:39 compute-0 nova_compute[117413]: 2025-10-08 16:56:39.365 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:39 compute-0 nova_compute[117413]: 2025-10-08 16:56:39.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:41 compute-0 podman[160671]: 2025-10-08 16:56:41.467366979 +0000 UTC m=+0.072296103 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm)
Oct 08 16:56:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:56:41.963 28633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:405
Oct 08 16:56:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:56:41.963 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:410
Oct 08 16:56:41 compute-0 ovn_metadata_agent[28628]: 2025-10-08 16:56:41.964 28633 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.12/site-packages/oslo_concurrency/lockutils.py:424
Oct 08 16:56:42 compute-0 nova_compute[117413]: 2025-10-08 16:56:42.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:44 compute-0 nova_compute[117413]: 2025-10-08 16:56:44.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:47 compute-0 nova_compute[117413]: 2025-10-08 16:56:47.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:48 compute-0 podman[160695]: 2025-10-08 16:56:48.483199496 +0000 UTC m=+0.080266912 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=watcher_latest, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:56:48 compute-0 podman[160694]: 2025-10-08 16:56:48.48823763 +0000 UTC m=+0.089643171 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 08 16:56:49 compute-0 nova_compute[117413]: 2025-10-08 16:56:49.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:50 compute-0 nova_compute[117413]: 2025-10-08 16:56:50.361 2 DEBUG oslo_service.periodic_task [None req-3a4217f7-cfcc-4ecb-8abf-6abecc55dded - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.12/site-packages/oslo_service/periodic_task.py:210
Oct 08 16:56:52 compute-0 nova_compute[117413]: 2025-10-08 16:56:52.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:54 compute-0 nova_compute[117413]: 2025-10-08 16:56:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:54 compute-0 podman[160734]: 2025-10-08 16:56:54.485094107 +0000 UTC m=+0.077829202 container health_status 53a5e5afdbd3a4649b5d310c0d6b9f3109242765a68834eff8c17df7ad40e913 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 08 16:56:54 compute-0 podman[160735]: 2025-10-08 16:56:54.543174742 +0000 UTC m=+0.126486327 container health_status de62403ec6a7f39db4ac9e06dbc3fd4e870a2c108f10e6e2ead0a2de244c1ff4 (image=38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_id=ovn_controller)
Oct 08 16:56:57 compute-0 nova_compute[117413]: 2025-10-08 16:56:57.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:59 compute-0 nova_compute[117413]: 2025-10-08 16:56:59.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:56:59 compute-0 podman[127881]: time="2025-10-08T16:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 08 16:56:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19538 "" "Go-http-client/1.1"
Oct 08 16:56:59 compute-0 podman[127881]: @ - - [08/Oct/2025:16:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3037 "" "Go-http-client/1.1"
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: ERROR   16:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: ERROR   16:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: ERROR   16:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: ERROR   16:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: ERROR   16:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 08 16:57:01 compute-0 openstack_network_exporter[130039]: 
Oct 08 16:57:02 compute-0 nova_compute[117413]: 2025-10-08 16:57:02.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:04 compute-0 nova_compute[117413]: 2025-10-08 16:57:04.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:07 compute-0 podman[160784]: 2025-10-08 16:57:07.469769065 +0000 UTC m=+0.075387962 container health_status 02510189ec10228b5885fbeeb65f1b1be89ce7cfb3039a522b480dc0f1dac214 (image=38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=watcher_latest, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251007, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-multipathd:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 08 16:57:07 compute-0 nova_compute[117413]: 2025-10-08 16:57:07.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:09 compute-0 nova_compute[117413]: 2025-10-08 16:57:09.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:09 compute-0 sshd-session[160804]: Accepted publickey for zuul from 192.168.122.10 port 39688 ssh2: ECDSA SHA256:ZIjHNHNxAuv0z7dTwV8SzPT4xe1+IFvqH/0VmHWdIl4
Oct 08 16:57:09 compute-0 systemd-logind[847]: New session 23 of user zuul.
Oct 08 16:57:09 compute-0 systemd[1]: Created slice User Slice of UID 1000.
Oct 08 16:57:09 compute-0 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 08 16:57:09 compute-0 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 08 16:57:09 compute-0 systemd[1]: Starting User Manager for UID 1000...
Oct 08 16:57:09 compute-0 systemd[160808]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:57:09 compute-0 systemd[160808]: Queued start job for default target Main User Target.
Oct 08 16:57:09 compute-0 sshd-session[160804]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 08 16:57:09 compute-0 systemd[160808]: Created slice User Application Slice.
Oct 08 16:57:09 compute-0 systemd[160808]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 08 16:57:09 compute-0 systemd[160808]: Started Daily Cleanup of User's Temporary Directories.
Oct 08 16:57:09 compute-0 systemd[160808]: Reached target Paths.
Oct 08 16:57:09 compute-0 systemd[160808]: Reached target Timers.
Oct 08 16:57:09 compute-0 systemd[160808]: Starting D-Bus User Message Bus Socket...
Oct 08 16:57:09 compute-0 systemd[160808]: Starting Create User's Volatile Files and Directories...
Oct 08 16:57:09 compute-0 systemd[160808]: Listening on D-Bus User Message Bus Socket.
Oct 08 16:57:09 compute-0 systemd[160808]: Reached target Sockets.
Oct 08 16:57:09 compute-0 systemd[160808]: Finished Create User's Volatile Files and Directories.
Oct 08 16:57:09 compute-0 systemd[160808]: Reached target Basic System.
Oct 08 16:57:09 compute-0 systemd[160808]: Reached target Main User Target.
Oct 08 16:57:09 compute-0 systemd[160808]: Startup finished in 168ms.
Oct 08 16:57:09 compute-0 systemd[1]: Started User Manager for UID 1000.
Oct 08 16:57:09 compute-0 systemd[1]: Started Session 23 of User zuul.
Oct 08 16:57:10 compute-0 sudo[160824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 08 16:57:10 compute-0 sudo[160824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 08 16:57:12 compute-0 podman[160960]: 2025-10-08 16:57:12.504045287 +0000 UTC m=+0.095038224 container health_status 3a6cabaabddaf7f16f113fbdfe3fd163db0eba7aaa27431aa403234d6f00de86 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Oct 08 16:57:12 compute-0 nova_compute[117413]: 2025-10-08 16:57:12.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:14 compute-0 nova_compute[117413]: 2025-10-08 16:57:14.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:14 compute-0 ovs-vsctl[161013]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 08 16:57:16 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 08 16:57:16 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 08 16:57:16 compute-0 virtqemud[117740]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 08 16:57:17 compute-0 nova_compute[117413]: 2025-10-08 16:57:17.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:17 compute-0 crontab[161441]: (root) LIST (root)
Oct 08 16:57:19 compute-0 nova_compute[117413]: 2025-10-08 16:57:19.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
Oct 08 16:57:19 compute-0 podman[161524]: 2025-10-08 16:57:19.483511352 +0000 UTC m=+0.071990475 container health_status c86ff8ee8e87753a64d5a2d80cdb55ec345bcd650500201156cccd2d738cdf3a (image=38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251007, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, tcib_build_tag=watcher_latest, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-neutron-metadata-agent-ovn:watcher_latest', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 08 16:57:19 compute-0 podman[161523]: 2025-10-08 16:57:19.493136768 +0000 UTC m=+0.087098358 container health_status 5aa736c122d09a64a79d37fd3246bab73a68cf2418cf0f50a8ac751fb032f9b0 (image=38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=watcher_latest, tcib_managed=true, org.label-schema.build-date=20251007, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': '38.102.83.163:5001/podified-master-centos10/openstack-iscsid:watcher_latest', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 08 16:57:20 compute-0 systemd[1]: Starting Hostname Service...
Oct 08 16:57:20 compute-0 systemd[1]: Started Hostname Service.
Oct 08 16:57:22 compute-0 nova_compute[117413]: 2025-10-08 16:57:22.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.12/site-packages/ovs/poller.py:263
